Test Report: Docker_Linux_crio 21796

dade2a2e0f7c4c88a0aa5c1a92ad2c1084f27e44:2025-10-25:42053

Failed tests (37/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 13.12
36 TestAddons/parallel/RegistryCreds 0.42
37 TestAddons/parallel/Ingress 146.06
38 TestAddons/parallel/InspektorGadget 5.34
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 29.8
42 TestAddons/parallel/Headlamp 2.59
43 TestAddons/parallel/CloudSpanner 5.29
44 TestAddons/parallel/LocalPath 8.18
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 5.28
47 TestAddons/parallel/AmdGpuDevicePlugin 5.27
97 TestFunctional/parallel/ServiceCmdConnect 602.89
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.64
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
190 TestJSONOutput/pause/Command 1.75
196 TestJSONOutput/unpause/Command 1.97
274 TestPause/serial/Pause 9.97
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.3
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.25
309 TestStartStop/group/old-k8s-version/serial/Pause 6.34
317 TestStartStop/group/no-preload/serial/Pause 6.53
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.52
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.52
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.1
338 TestStartStop/group/newest-cni/serial/Pause 6.37
346 TestStartStop/group/embed-certs/serial/Pause 6.25
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.53
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable volcano --alsologtostderr -v=1: exit status 11 (253.782361ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:32:08.402816   19107 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:08.403116   19107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:08.403126   19107 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:08.403129   19107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:08.403323   19107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:08.403584   19107 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:08.403949   19107 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:08.403969   19107 addons.go:606] checking whether the cluster is paused
	I1025 08:32:08.404048   19107 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:08.404064   19107 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:08.404436   19107 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:08.423779   19107 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:08.423832   19107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:08.441791   19107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:08.541515   19107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:08.541598   19107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:08.571393   19107 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:08.571425   19107 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:08.571429   19107 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:08.571433   19107 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:08.571437   19107 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:08.571441   19107 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:08.571444   19107 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:08.571447   19107 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:08.571450   19107 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:08.571463   19107 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:08.571468   19107 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:08.571471   19107 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:08.571474   19107 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:08.571477   19107 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:08.571479   19107 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:08.571487   19107 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:08.571493   19107 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:08.571497   19107 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:08.571500   19107 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:08.571502   19107 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:08.571505   19107 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:08.571507   19107 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:08.571509   19107 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:08.571512   19107 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:08.571514   19107 cri.go:89] found id: ""
	I1025 08:32:08.571564   19107 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:08.586534   19107 out.go:203] 
	W1025 08:32:08.587934   19107 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:08.587965   19107 out.go:285] * 
	* 
	W1025 08:32:08.591017   19107 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:08.592747   19107 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
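
Every addon-disable failure in this report shares the root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers via crictl and then shelling out to `sudo runc list -f json`. On this CRI-O node the runc state root /run/runc does not exist, so that command exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED even though nothing is paused. Below is a minimal Go sketch of that probe, assuming it runs on the node itself (e.g. over `minikube ssh`); the two commands are taken verbatim from the log lines above, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1 of the probe (the cri.go lines in the log): list the
	// kube-system containers known to the CRI runtime.
	ps := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	if out, err := ps.CombinedOutput(); err != nil {
		fmt.Printf("crictl ps failed: %v\n%s", err, out)
		return
	}

	// Step 2: ask runc for its container list. With /run/runc missing this
	// exits 1 with "open /run/runc: no such file or directory", which is
	// exactly the status the disable path treats as fatal.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}

Running `sudo runc list -f json` by hand over `minikube ssh` should reproduce the stderr line immediately, which makes this quick to confirm as a runtime-state-root mismatch rather than anything Volcano-specific.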

TestAddons/parallel/Registry (13.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.178804ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002676225s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004042624s
addons_test.go:392: (dbg) Run:  kubectl --context addons-475995 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-475995 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-475995 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.578799135s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable registry --alsologtostderr -v=1: exit status 11 (285.236104ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:32:29.322781   21728 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:29.323114   21728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:29.323127   21728 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:29.323134   21728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:29.323381   21728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:29.323722   21728 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:29.324177   21728 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:29.324204   21728 addons.go:606] checking whether the cluster is paused
	I1025 08:32:29.324328   21728 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:29.324352   21728 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:29.324958   21728 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:29.348383   21728 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:29.348452   21728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:29.370416   21728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:29.478982   21728 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:29.479062   21728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:29.516799   21728 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:29.516824   21728 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:29.516830   21728 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:29.516835   21728 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:29.516840   21728 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:29.516845   21728 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:29.516850   21728 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:29.516854   21728 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:29.516858   21728 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:29.516870   21728 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:29.516878   21728 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:29.516882   21728 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:29.516887   21728 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:29.516891   21728 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:29.516895   21728 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:29.516901   21728 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:29.516910   21728 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:29.516916   21728 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:29.516920   21728 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:29.516924   21728 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:29.516928   21728 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:29.516932   21728 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:29.516936   21728 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:29.516950   21728 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:29.516957   21728 cri.go:89] found id: ""
	I1025 08:32:29.517001   21728 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:29.534970   21728 out.go:203] 
	W1025 08:32:29.536649   21728 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:29.536672   21728 out.go:285] * 
	* 
	W1025 08:32:29.541240   21728 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:29.542923   21728 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.12s)
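
The registry checks themselves pass (the busybox `wget --spider` probe at addons_test.go:397 comes back in ~2.6s); only the trailing addon disable fails, for the same runc reason sketched under Volcano. For reference, a hedged Go equivalent of that in-cluster probe: `wget --spider` fetches headers without downloading a body, so HEAD is the closest match. Like the busybox one-liner, this only resolves from inside the cluster, which is why the test wraps it in `kubectl run`; the service URL is the one from the log.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// HEAD against the registry Service's cluster DNS name, mirroring
	// wget --spider -S http://registry.kube-system.svc.cluster.local
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("registry reachable:", resp.Status)
}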

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.795184ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-475995
addons_test.go:332: (dbg) Run:  kubectl --context addons-475995 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (250.704424ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:32:32.513421   22251 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:32.513760   22251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:32.513771   22251 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:32.513776   22251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:32.513976   22251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:32.514234   22251 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:32.514557   22251 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:32.514571   22251 addons.go:606] checking whether the cluster is paused
	I1025 08:32:32.514694   22251 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:32.514709   22251 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:32.515106   22251 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:32.534187   22251 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:32.534238   22251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:32.553949   22251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:32.653418   22251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:32.653506   22251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:32.682736   22251 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:32.682766   22251 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:32.682770   22251 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:32.682773   22251 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:32.682776   22251 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:32.682780   22251 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:32.682783   22251 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:32.682786   22251 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:32.682788   22251 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:32.682801   22251 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:32.682806   22251 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:32.682809   22251 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:32.682811   22251 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:32.682814   22251 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:32.682817   22251 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:32.682831   22251 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:32.682836   22251 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:32.682840   22251 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:32.682843   22251 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:32.682845   22251 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:32.682848   22251 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:32.682850   22251 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:32.682853   22251 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:32.682859   22251 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:32.682862   22251 cri.go:89] found id: ""
	I1025 08:32:32.682909   22251 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:32.697452   22251 out.go:203] 
	W1025 08:32:32.698861   22251 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:32.698893   22251 out.go:285] * 
	* 
	W1025 08:32:32.702216   22251 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:32.703851   22251 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (146.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-475995 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-475995 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-475995 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ee9770cf-b24a-4c18-a48c-36bc2a8a64e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ee9770cf-b24a-4c18-a48c-36bc2a8a64e6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004409559s
I1025 08:32:35.673558    9473 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.401929438s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-475995 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
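
This failure is independent of the runc issue above: the controller and the nginx pod both go healthy, but the in-node request `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` never gets an answer within the timeout (curl exit status 28 is its timeout code, which the ssh wrapper propagates). A hedged Go rendering of the same probe, assuming it runs on the node where ingress-nginx listens on 127.0.0.1:80; the Host override is what makes the ingress rule match, since nginx.example.com has no real DNS entry.

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Route by the ingress host rule instead of DNS.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("no response from ingress (this test's failure mode):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ingress answered:", resp.Status)
}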
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-475995
helpers_test.go:243: (dbg) docker inspect addons-475995:

-- stdout --
	[
	    {
	        "Id": "231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822",
	        "Created": "2025-10-25T08:29:56.512830024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11458,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:29:56.546472591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/hostname",
	        "HostsPath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/hosts",
	        "LogPath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822-json.log",
	        "Name": "/addons-475995",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-475995:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-475995",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822",
	                "LowerDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-475995",
	                "Source": "/var/lib/docker/volumes/addons-475995/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-475995",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-475995",
	                "name.minikube.sigs.k8s.io": "addons-475995",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb9467cbc7b9f95302d79c8838782d19ddb3e500cfde6d9573a8d192715689e5",
	            "SandboxKey": "/var/run/docker/netns/cb9467cbc7b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-475995": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:b2:f8:63:69:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9b1c98f265a8051e0e74890fc7977c69249b8bf87efb30cbeba9f5fa2e7d626c",
	                    "EndpointID": "c8306d5d332d593c0db051f20a6481e8dfc88e1608b1793055dd543e06878553",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-475995",
	                        "231e1e8ad0cc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
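
This inspect output is also where the earlier cli_runner lines got their SSH endpoint: minikube resolves the node's forwarded port 22 with a Go template handed to `docker container inspect -f`. A small sketch of the same lookup, using the template verbatim from the stderr sections and this run's profile name addons-475995:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index into .NetworkSettings.Ports["22/tcp"][0].HostPort, as shown in
	// the cli_runner.go:164 lines above.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-475995").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32768 in this run
}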
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-475995 -n addons-475995
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-475995 logs -n 25: (1.17639184s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-499929 --alsologtostderr --binary-mirror http://127.0.0.1:44063 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-499929 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-499929                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-499929 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-475995                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-475995                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-475995 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-475995 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-475995 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ ssh     │ addons-475995 ssh cat /opt/local-path-provisioner/pvc-a0dc3c3f-7548-4bee-b78c-92c7ac072de7_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-475995 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ ip      │ addons-475995 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-475995 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-475995                                                                                                                                                                                                                                                                                                                                                                                           │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-475995 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ ssh     │ addons-475995 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ ip      │ addons-475995 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-475995        │ jenkins │ v1.37.0 │ 25 Oct 25 08:34 UTC │ 25 Oct 25 08:34 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:32.773146   10795 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:32.773376   10795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:32.773385   10795 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:32.773389   10795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:32.773610   10795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:29:32.774124   10795 out.go:368] Setting JSON to false
	I1025 08:29:32.774947   10795 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":721,"bootTime":1761380252,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:29:32.775029   10795 start.go:141] virtualization: kvm guest
	I1025 08:29:32.777170   10795 out.go:179] * [addons-475995] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:29:32.778756   10795 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:29:32.778754   10795 notify.go:220] Checking for updates...
	I1025 08:29:32.780083   10795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:32.781413   10795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:29:32.782658   10795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 08:29:32.783778   10795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:29:32.784906   10795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:29:32.786253   10795 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:32.810544   10795 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:29:32.810609   10795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:32.868386   10795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 08:29:32.856916468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:32.868537   10795 docker.go:318] overlay module found
	I1025 08:29:32.870316   10795 out.go:179] * Using the docker driver based on user configuration
	I1025 08:29:32.871566   10795 start.go:305] selected driver: docker
	I1025 08:29:32.871584   10795 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:32.871599   10795 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:29:32.872298   10795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:32.929342   10795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 08:29:32.919413351 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:32.929489   10795 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:32.929712   10795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:29:32.931524   10795 out.go:179] * Using Docker driver with root privileges
	I1025 08:29:32.932878   10795 cni.go:84] Creating CNI manager for ""
	I1025 08:29:32.932939   10795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:29:32.932949   10795 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:29:32.933000   10795 start.go:349] cluster config:
	{Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:29:32.934252   10795 out.go:179] * Starting "addons-475995" primary control-plane node in "addons-475995" cluster
	I1025 08:29:32.935399   10795 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:29:32.936631   10795 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:29:32.937765   10795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:32.937790   10795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:29:32.937809   10795 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:29:32.937818   10795 cache.go:58] Caching tarball of preloaded images
	I1025 08:29:32.937909   10795 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 08:29:32.937923   10795 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:29:32.938260   10795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/config.json ...
	I1025 08:29:32.938286   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/config.json: {Name:mkfeb9e3f581fb26b967f776256af36385607ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
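
	(Editor's note: the config save above goes through minikube's file lock, retrying at a 500ms delay with a 1m timeout before writing. A minimal Go sketch of that acquire-with-timeout pattern, using an exclusive lock file; the helper name and lock scheme are illustrative, not minikube's exact implementation:)

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // writeFileLocked retries an exclusive lock-file create every delay
	    // until timeout, writes the payload, then releases the lock.
	    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	        lock := path + ".lock"
	        deadline := time.Now().Add(timeout)
	        for {
	            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	            if err == nil {
	                f.Close()
	                break // lock acquired
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("timed out acquiring %s: %w", lock, err)
	            }
	            time.Sleep(delay)
	        }
	        defer os.Remove(lock) // release for the next writer
	        return os.WriteFile(path, data, 0o644)
	    }

	    func main() {
	        cfg := []byte(`{"Name":"addons-475995"}`)
	        if err := writeFileLocked("/tmp/config.json", cfg, 500*time.Millisecond, time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }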
	I1025 08:29:32.953807   10795 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:29:32.953900   10795 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:29:32.953915   10795 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 08:29:32.953920   10795 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 08:29:32.953929   10795 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 08:29:32.953936   10795 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 08:29:45.086495   10795 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 08:29:45.086537   10795 cache.go:232] Successfully downloaded all kic artifacts
	I1025 08:29:45.086576   10795 start.go:360] acquireMachinesLock for addons-475995: {Name:mk790996f547979aa305fcb4f65a603a5e244882 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:29:45.086690   10795 start.go:364] duration metric: took 94.93µs to acquireMachinesLock for "addons-475995"
	I1025 08:29:45.086714   10795 start.go:93] Provisioning new machine with config: &{Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:29:45.086776   10795 start.go:125] createHost starting for "" (driver="docker")
	I1025 08:29:45.088548   10795 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 08:29:45.088781   10795 start.go:159] libmachine.API.Create for "addons-475995" (driver="docker")
	I1025 08:29:45.088809   10795 client.go:168] LocalClient.Create starting
	I1025 08:29:45.088897   10795 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 08:29:45.239559   10795 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 08:29:45.369655   10795 cli_runner.go:164] Run: docker network inspect addons-475995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 08:29:45.386250   10795 cli_runner.go:211] docker network inspect addons-475995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 08:29:45.386308   10795 network_create.go:284] running [docker network inspect addons-475995] to gather additional debugging logs...
	I1025 08:29:45.386324   10795 cli_runner.go:164] Run: docker network inspect addons-475995
	W1025 08:29:45.401564   10795 cli_runner.go:211] docker network inspect addons-475995 returned with exit code 1
	I1025 08:29:45.401589   10795 network_create.go:287] error running [docker network inspect addons-475995]: docker network inspect addons-475995: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-475995 not found
	I1025 08:29:45.401600   10795 network_create.go:289] output of [docker network inspect addons-475995]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-475995 not found
	
	** /stderr **
	I1025 08:29:45.401700   10795 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:29:45.417980   10795 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c3d3a0}
	I1025 08:29:45.418033   10795 network_create.go:124] attempt to create docker network addons-475995 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 08:29:45.418072   10795 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-475995 addons-475995
	I1025 08:29:45.470683   10795 network_create.go:108] docker network addons-475995 192.168.49.0/24 created
	I1025 08:29:45.470712   10795 kic.go:121] calculated static IP "192.168.49.2" for the "addons-475995" container
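
	(Editor's note: the "calculated static IP" above is derived mechanically from the chosen subnet: the gateway takes .1 and the first client address .2, which becomes the node IP. A small sketch of that derivation with Go's standard library; the helper name is illustrative:)

	    package main

	    import (
	        "fmt"
	        "net"
	    )

	    // hostInSubnet returns the n-th IPv4 address inside a /24-style subnet,
	    // e.g. n=1 -> gateway 192.168.49.1, n=2 -> first container 192.168.49.2.
	    func hostInSubnet(cidr string, n byte) (net.IP, error) {
	        _, ipnet, err := net.ParseCIDR(cidr)
	        if err != nil {
	            return nil, err
	        }
	        ip := ipnet.IP.To4()
	        if ip == nil {
	            return nil, fmt.Errorf("not an IPv4 subnet: %s", cidr)
	        }
	        out := make(net.IP, 4)
	        copy(out, ip)
	        out[3] = n // only valid while n stays inside the mask
	        return out, nil
	    }

	    func main() {
	        gw, _ := hostInSubnet("192.168.49.0/24", 1)
	        node, _ := hostInSubnet("192.168.49.0/24", 2)
	        fmt.Println(gw, node) // 192.168.49.1 192.168.49.2
	    }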
	I1025 08:29:45.470776   10795 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 08:29:45.485995   10795 cli_runner.go:164] Run: docker volume create addons-475995 --label name.minikube.sigs.k8s.io=addons-475995 --label created_by.minikube.sigs.k8s.io=true
	I1025 08:29:45.502368   10795 oci.go:103] Successfully created a docker volume addons-475995
	I1025 08:29:45.502448   10795 cli_runner.go:164] Run: docker run --rm --name addons-475995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-475995 --entrypoint /usr/bin/test -v addons-475995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 08:29:52.128582   10795 cli_runner.go:217] Completed: docker run --rm --name addons-475995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-475995 --entrypoint /usr/bin/test -v addons-475995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.626092431s)
	I1025 08:29:52.128607   10795 oci.go:107] Successfully prepared a docker volume addons-475995
	I1025 08:29:52.128623   10795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:52.128654   10795 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 08:29:52.128722   10795 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-475995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 08:29:56.439151   10795 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-475995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.310380328s)
	I1025 08:29:56.439183   10795 kic.go:203] duration metric: took 4.310525152s to extract preloaded images to volume ...
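
	(Editor's note: the extraction step above stages the preload by running tar inside a throwaway container, so the images land directly in the named volume. Roughly, via os/exec, mirroring the cli_runner call logged above; a sketch of the call shape, with the image digest elided:)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        tarball := "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773" // digest elided

	        // Mount the tarball read-only, mount the cluster volume at /extractDir,
	        // and let tar decompress (-I lz4) straight into the volume.
	        cmd := exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", tarball+":/preloaded.tar:ro",
	            "-v", "addons-475995:/extractDir",
	            image,
	            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	        if out, err := cmd.CombinedOutput(); err != nil {
	            fmt.Printf("extract failed: %v\n%s", err, out)
	        }
	    }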
	W1025 08:29:56.439284   10795 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 08:29:56.439324   10795 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 08:29:56.439365   10795 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 08:29:56.497582   10795 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-475995 --name addons-475995 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-475995 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-475995 --network addons-475995 --ip 192.168.49.2 --volume addons-475995:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 08:29:56.780495   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Running}}
	I1025 08:29:56.799591   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:29:56.818491   10795 cli_runner.go:164] Run: docker exec addons-475995 stat /var/lib/dpkg/alternatives/iptables
	I1025 08:29:56.863803   10795 oci.go:144] the created container "addons-475995" has a running status.
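
	(Editor's note: the container inspect calls around this step poll Docker until the container reports State.Running. A sketch of that polling loop; the helper name and attempt count are illustrative:)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // waitRunning polls `docker container inspect --format {{.State.Running}}`
	    // until it prints "true" or the attempts run out.
	    func waitRunning(name string, attempts int) error {
	        for i := 0; i < attempts; i++ {
	            out, err := exec.Command("docker", "container", "inspect",
	                name, "--format", "{{.State.Running}}").Output()
	            if err == nil && strings.TrimSpace(string(out)) == "true" {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("container %s never reached running state", name)
	    }

	    func main() {
	        if err := waitRunning("addons-475995", 20); err != nil {
	            fmt.Println(err)
	        }
	    }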
	I1025 08:29:56.863836   10795 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa...
	I1025 08:29:57.038968   10795 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 08:29:57.070162   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:29:57.092168   10795 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 08:29:57.092184   10795 kic_runner.go:114] Args: [docker exec --privileged addons-475995 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 08:29:57.138854   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:29:57.159854   10795 machine.go:93] provisionDockerMachine start ...
	I1025 08:29:57.159970   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:57.179192   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.179485   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:57.179498   10795 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:29:57.319606   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-475995
	
	I1025 08:29:57.319636   10795 ubuntu.go:182] provisioning hostname "addons-475995"
	I1025 08:29:57.319704   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:57.337574   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.337866   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:57.337887   10795 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-475995 && echo "addons-475995" | sudo tee /etc/hostname
	I1025 08:29:57.486381   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-475995
	
	I1025 08:29:57.486475   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:57.504393   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.504940   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:57.504975   10795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-475995' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-475995/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-475995' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:29:57.643988   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
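
	(Editor's note: the hostname provisioning above runs each command over SSH to the host port Docker mapped to the container's port 22, here 32768. A minimal equivalent with golang.org/x/crypto/ssh, assuming the generated id_rsa key and, like the native client in the log, no host-key verification:)

	    package main

	    import (
	        "fmt"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        key, err := os.ReadFile("/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
	        }
	        client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()

	        sess, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer sess.Close()
	        out, err := sess.Output("hostname")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%s", out) // addons-475995
	    }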
	I1025 08:29:57.644018   10795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 08:29:57.644041   10795 ubuntu.go:190] setting up certificates
	I1025 08:29:57.644053   10795 provision.go:84] configureAuth start
	I1025 08:29:57.644104   10795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-475995
	I1025 08:29:57.660617   10795 provision.go:143] copyHostCerts
	I1025 08:29:57.660707   10795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 08:29:57.660840   10795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 08:29:57.660927   10795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 08:29:57.660999   10795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.addons-475995 san=[127.0.0.1 192.168.49.2 addons-475995 localhost minikube]
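
	(Editor's note: configureAuth then issues a server certificate whose SANs cover exactly the list above: the loopback address, the container IP, the hostname, and the generic names. A condensed sketch of minting such a cert with crypto/x509; self-signed here for brevity, whereas minikube signs with its CA key pair:)

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.addons-475995"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs from the log line: IP addresses plus DNS names.
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	            DNSNames:    []string{"addons-475995", "localhost", "minikube"},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }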
	I1025 08:29:58.214345   10795 provision.go:177] copyRemoteCerts
	I1025 08:29:58.214398   10795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:29:58.214448   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.231580   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.329330   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:29:58.347036   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 08:29:58.362733   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 08:29:58.378269   10795 provision.go:87] duration metric: took 734.204044ms to configureAuth
	I1025 08:29:58.378297   10795 ubuntu.go:206] setting minikube options for container-runtime
	I1025 08:29:58.378465   10795 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:29:58.378574   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.395015   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:58.395257   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:58.395282   10795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:29:58.634918   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:29:58.634942   10795 machine.go:96] duration metric: took 1.475060839s to provisionDockerMachine
	I1025 08:29:58.634954   10795 client.go:171] duration metric: took 13.546136728s to LocalClient.Create
	I1025 08:29:58.634976   10795 start.go:167] duration metric: took 13.546194737s to libmachine.API.Create "addons-475995"
	I1025 08:29:58.634985   10795 start.go:293] postStartSetup for "addons-475995" (driver="docker")
	I1025 08:29:58.634996   10795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:29:58.635065   10795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:29:58.635114   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.652101   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.751134   10795 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:29:58.754554   10795 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 08:29:58.754594   10795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 08:29:58.754606   10795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 08:29:58.754692   10795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 08:29:58.754726   10795 start.go:296] duration metric: took 119.734756ms for postStartSetup
	I1025 08:29:58.754989   10795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-475995
	I1025 08:29:58.772005   10795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/config.json ...
	I1025 08:29:58.772282   10795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:29:58.772329   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.789692   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.884492   10795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 08:29:58.888737   10795 start.go:128] duration metric: took 13.801947216s to createHost
	I1025 08:29:58.888758   10795 start.go:83] releasing machines lock for "addons-475995", held for 13.802055674s
	I1025 08:29:58.888807   10795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-475995
	I1025 08:29:58.905111   10795 ssh_runner.go:195] Run: cat /version.json
	I1025 08:29:58.905151   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.905198   10795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:29:58.905258   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.924458   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.924846   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:59.072759   10795 ssh_runner.go:195] Run: systemctl --version
	I1025 08:29:59.079039   10795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:29:59.111276   10795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:29:59.115572   10795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:29:59.115621   10795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:29:59.139451   10795 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 08:29:59.139476   10795 start.go:495] detecting cgroup driver to use...
	I1025 08:29:59.139501   10795 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 08:29:59.139550   10795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:29:59.154160   10795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:29:59.165287   10795 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:29:59.165349   10795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:29:59.180352   10795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:29:59.196023   10795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:29:59.274471   10795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:29:59.356885   10795 docker.go:234] disabling docker service ...
	I1025 08:29:59.356947   10795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:29:59.374404   10795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:29:59.386183   10795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:29:59.468690   10795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:29:59.547055   10795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:29:59.558873   10795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:29:59.571998   10795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:29:59.572060   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.581462   10795 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 08:29:59.581519   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.589714   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.597910   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.606192   10795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:29:59.613687   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.621977   10795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.634359   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
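
	(Editor's note: each of the sed invocations above is a line-oriented rewrite of the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. The two central edits, expressed with Go's regexp as a sketch; the real code shells out to sed exactly as logged:)

	    package main

	    import (
	        "os"
	        "regexp"
	    )

	    func main() {
	        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	        data, err := os.ReadFile(conf)
	        if err != nil {
	            panic(err)
	        }
	        // Pin the pause image: pause_image = "registry.k8s.io/pause:3.10.1"
	        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	        // Switch to the systemd cgroup driver and keep conmon in the pod cgroup
	        // (the log does this as a replace, a delete, and an append).
	        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	            ReplaceAll(data, []byte("cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\""))
	        if err := os.WriteFile(conf, data, 0o644); err != nil {
	            panic(err)
	        }
	    }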
	I1025 08:29:59.642291   10795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:29:59.649023   10795 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 08:29:59.649077   10795 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 08:29:59.660206   10795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 08:29:59.667133   10795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:29:59.741934   10795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:29:59.840274   10795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:29:59.840344   10795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:29:59.844055   10795 start.go:563] Will wait 60s for crictl version
	I1025 08:29:59.844119   10795 ssh_runner.go:195] Run: which crictl
	I1025 08:29:59.847327   10795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 08:29:59.870721   10795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 08:29:59.870819   10795 ssh_runner.go:195] Run: crio --version
	I1025 08:29:59.896525   10795 ssh_runner.go:195] Run: crio --version
	I1025 08:29:59.924605   10795 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 08:29:59.925921   10795 cli_runner.go:164] Run: docker network inspect addons-475995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:29:59.942397   10795 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 08:29:59.946280   10795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:29:59.955984   10795 kubeadm.go:883] updating cluster {Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:29:59.956101   10795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:59.956146   10795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:29:59.985192   10795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:29:59.985209   10795 crio.go:433] Images already preloaded, skipping extraction
	I1025 08:29:59.985253   10795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:00.009056   10795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:30:00.009077   10795 cache_images.go:85] Images are preloaded, skipping loading
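
	(Editor's note: the preload verification parses `sudo crictl images --output json` and checks the repo tags against the expected image list. A sketch of that decode, with the struct trimmed to the fields the check needs:)

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os/exec"
	    )

	    // imageList mirrors the shape of `crictl images --output json`.
	    type imageList struct {
	        Images []struct {
	            RepoTags []string `json:"repoTags"`
	        } `json:"images"`
	    }

	    func main() {
	        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	        if err != nil {
	            panic(err)
	        }
	        var list imageList
	        if err := json.Unmarshal(out, &list); err != nil {
	            panic(err)
	        }
	        for _, img := range list.Images {
	            fmt.Println(img.RepoTags) // compare against the expected preload set
	        }
	    }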
	I1025 08:30:00.009084   10795 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 08:30:00.009163   10795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-475995 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 08:30:00.009218   10795 ssh_runner.go:195] Run: crio config
	I1025 08:30:00.050940   10795 cni.go:84] Creating CNI manager for ""
	I1025 08:30:00.050965   10795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:00.050989   10795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:30:00.051019   10795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-475995 NodeName:addons-475995 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:30:00.051173   10795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-475995"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 08:30:00.051246   10795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:30:00.059196   10795 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:30:00.059256   10795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:30:00.066481   10795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 08:30:00.078044   10795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:30:00.091945   10795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
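
	(Editor's note: the kubeadm.yaml.new written above is what bootstrapping consumes: further along the start path, minikube hands it to kubeadm init via --config. Roughly, as a sketch; the flags are trimmed and the real invocation also passes preflight-error ignores:)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Run the version-pinned kubeadm binary against the generated config.
	        cmd := exec.Command("sudo",
	            "/var/lib/minikube/binaries/v1.34.1/kubeadm", "init",
	            "--config", "/var/tmp/minikube/kubeadm.yaml.new")
	        out, err := cmd.CombinedOutput()
	        fmt.Printf("%s", out)
	        if err != nil {
	            fmt.Println("kubeadm init failed:", err)
	        }
	    }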
	I1025 08:30:00.103152   10795 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 08:30:00.106308   10795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:00.115358   10795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:00.192219   10795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:00.218714   10795 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995 for IP: 192.168.49.2
	I1025 08:30:00.218734   10795 certs.go:195] generating shared ca certs ...
	I1025 08:30:00.218748   10795 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.218885   10795 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 08:30:00.435116   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt ...
	I1025 08:30:00.435147   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt: {Name:mkcb9fce405d7437ce47d5dbf66cddac56bf3772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.435338   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key ...
	I1025 08:30:00.435357   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key: {Name:mk921cbceda1cabf580f4626210826663b159287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.435471   10795 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 08:30:00.785303   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt ...
	I1025 08:30:00.785333   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt: {Name:mk5a1bfd48d2578a0ad435965ac442fbc17cdb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.785527   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key ...
	I1025 08:30:00.785545   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key: {Name:mkbfa3033bd1239fa1892508d295e32f295ca57b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.785657   10795 certs.go:257] generating profile certs ...
	I1025 08:30:00.785734   10795 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.key
	I1025 08:30:00.785755   10795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt with IP's: []
	I1025 08:30:01.162628   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt ...
	I1025 08:30:01.162666   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: {Name:mkb84c23f8d49a5a2b7fb68a257fbe3748a01896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.162841   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.key ...
	I1025 08:30:01.162852   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.key: {Name:mk0a0047f2d7e7599a1f88676c3b8af147a29cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.162919   10795 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d
	I1025 08:30:01.162937   10795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 08:30:01.332193   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d ...
	I1025 08:30:01.332221   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d: {Name:mk1282ef513e2075440591dae83dae6157fefdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.332376   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d ...
	I1025 08:30:01.332389   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d: {Name:mkb50ef1bbc2e488e5ad3862947c4eb0d936e180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.332470   10795 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt
	I1025 08:30:01.332562   10795 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key
	I1025 08:30:01.332618   10795 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key
	I1025 08:30:01.332636   10795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt with IP's: []
	I1025 08:30:01.444960   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt ...
	I1025 08:30:01.444988   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt: {Name:mk345660ad2cca55310cfaa84ac51e8d8f94bef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.445138   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key ...
	I1025 08:30:01.445148   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key: {Name:mkabd8770e827fc65dc5a90a8ac98e79d7dd057d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.445325   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 08:30:01.445358   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 08:30:01.445380   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:30:01.445405   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
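	The certs.go/crypto.go lines above show minikube minting its shared CAs ("minikubeCA", "proxyClientCA") before signing the per-profile client, apiserver, and aggregator certificates. A self-contained Go sketch of the underlying step, generating a self-signed CA with the standard library; the subject and validity values are illustrative, not minikube's exact ones:

	// Sketch: create an RSA key and a self-signed CA certificate, PEM-encoded,
	// which is the shape of the "generating ... ca cert" steps in the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"}, // illustrative
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		// Self-signed: the template is its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}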
	I1025 08:30:01.446006   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:30:01.463167   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 08:30:01.479237   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:30:01.495099   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 08:30:01.511328   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:30:01.527209   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 08:30:01.543028   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:30:01.558850   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:30:01.574770   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:30:01.592892   10795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:30:01.604439   10795 ssh_runner.go:195] Run: openssl version
	I1025 08:30:01.610131   10795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:30:01.619928   10795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:01.623252   10795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:01.623290   10795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:01.656851   10795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
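	The two commands above compute the CA's OpenSSL subject hash and link the PEM into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients in the node trust it. A rough Go equivalent, assuming openssl is on PATH; the shell version guards with test -L where this sketch checks Lstat:

	// Sketch: compute the subject hash of a CA cert via openssl and symlink it
	// into /etc/ssl/certs under the <hash>.0 name OpenSSL looks up.
	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
				panic(err)
			}
		}
	}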
	I1025 08:30:01.665067   10795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:30:01.668442   10795 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 08:30:01.668498   10795 kubeadm.go:400] StartCluster: {Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:30:01.668571   10795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:30:01.668609   10795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:30:01.694312   10795 cri.go:89] found id: ""
	I1025 08:30:01.694379   10795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:30:01.701936   10795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:30:01.709196   10795 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 08:30:01.709254   10795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:30:01.716400   10795 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:30:01.716414   10795 kubeadm.go:157] found existing configuration files:
	
	I1025 08:30:01.716460   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:30:01.723445   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:30:01.723499   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:30:01.730284   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:30:01.737196   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:30:01.737237   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:30:01.743949   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:30:01.750721   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:30:01.750776   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:30:01.757585   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:30:01.764920   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:30:01.764964   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 08:30:01.772255   10795 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 08:30:01.807965   10795 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:30:01.808021   10795 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:30:01.827239   10795 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 08:30:01.827336   10795 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 08:30:01.827418   10795 kubeadm.go:318] OS: Linux
	I1025 08:30:01.827506   10795 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 08:30:01.827596   10795 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 08:30:01.827686   10795 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 08:30:01.827759   10795 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 08:30:01.827838   10795 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 08:30:01.827915   10795 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 08:30:01.827996   10795 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 08:30:01.828059   10795 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 08:30:01.880245   10795 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:30:01.880422   10795 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:30:01.880562   10795 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:30:01.888303   10795 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:30:01.890315   10795 out.go:252]   - Generating certificates and keys ...
	I1025 08:30:01.890428   10795 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:30:01.890527   10795 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:30:02.134040   10795 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:30:02.312321   10795 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:30:02.609527   10795 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:30:03.118501   10795 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:30:03.161005   10795 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:30:03.161216   10795 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-475995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:03.575306   10795 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:30:03.575450   10795 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-475995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:04.280791   10795 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:30:04.414694   10795 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:30:04.679354   10795 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:30:04.679416   10795 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:30:05.233758   10795 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:30:05.541937   10795 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:30:05.792508   10795 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:30:06.203342   10795 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:30:06.549912   10795 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:30:06.550406   10795 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:30:06.554084   10795 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:30:06.555867   10795 out.go:252]   - Booting up control plane ...
	I1025 08:30:06.556022   10795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:30:06.556119   10795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:30:06.556181   10795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:30:06.569270   10795 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:30:06.569418   10795 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:30:06.575595   10795 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:30:06.575868   10795 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:30:06.575919   10795 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:30:06.673682   10795 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:30:06.673836   10795 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:30:07.674633   10795 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001140394s
	I1025 08:30:07.678179   10795 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:30:07.678348   10795 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 08:30:07.678494   10795 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:30:07.678616   10795 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:30:08.773235   10795 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.094959701s
	I1025 08:30:09.798947   10795 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.120645761s
	I1025 08:30:11.679810   10795 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00152719s
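	The [control-plane-check] lines above poll each component's health endpoint until it answers or a 4m0s budget expires. A hedged Go sketch of that polling shape; TLS verification is skipped here only because the serving certificate chains to the cluster's own CA rather than a system root, and a real checker would pin that CA instead:

	// Sketch: poll a health endpoint until it returns 200 or the budget expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.49.2:8443/livez", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("kube-apiserver is healthy")
	}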
	I1025 08:30:11.690243   10795 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:30:11.699778   10795 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:30:11.708569   10795 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:30:11.708888   10795 kubeadm.go:318] [mark-control-plane] Marking the node addons-475995 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:30:11.716051   10795 kubeadm.go:318] [bootstrap-token] Using token: nbs337.bo63fhl08q3plpyx
	I1025 08:30:11.717485   10795 out.go:252]   - Configuring RBAC rules ...
	I1025 08:30:11.717605   10795 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:30:11.721130   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:30:11.725836   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:30:11.728022   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:30:11.730293   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:30:11.733147   10795 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 08:30:12.085043   10795 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 08:30:12.498518   10795 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 08:30:13.085051   10795 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 08:30:13.086102   10795 kubeadm.go:318] 
	I1025 08:30:13.086191   10795 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 08:30:13.086202   10795 kubeadm.go:318] 
	I1025 08:30:13.086315   10795 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 08:30:13.086323   10795 kubeadm.go:318] 
	I1025 08:30:13.086355   10795 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 08:30:13.086456   10795 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 08:30:13.086556   10795 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 08:30:13.086576   10795 kubeadm.go:318] 
	I1025 08:30:13.086675   10795 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 08:30:13.086686   10795 kubeadm.go:318] 
	I1025 08:30:13.086760   10795 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 08:30:13.086768   10795 kubeadm.go:318] 
	I1025 08:30:13.086844   10795 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 08:30:13.086949   10795 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 08:30:13.087065   10795 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 08:30:13.087082   10795 kubeadm.go:318] 
	I1025 08:30:13.087187   10795 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 08:30:13.087293   10795 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 08:30:13.087303   10795 kubeadm.go:318] 
	I1025 08:30:13.087430   10795 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token nbs337.bo63fhl08q3plpyx \
	I1025 08:30:13.087573   10795 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 08:30:13.087610   10795 kubeadm.go:318] 	--control-plane 
	I1025 08:30:13.087616   10795 kubeadm.go:318] 
	I1025 08:30:13.087752   10795 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 08:30:13.087763   10795 kubeadm.go:318] 
	I1025 08:30:13.087872   10795 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token nbs337.bo63fhl08q3plpyx \
	I1025 08:30:13.088025   10795 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 08:30:13.089453   10795 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 08:30:13.089616   10795 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 08:30:13.089662   10795 cni.go:84] Creating CNI manager for ""
	I1025 08:30:13.089675   10795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:13.091372   10795 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 08:30:13.092433   10795 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 08:30:13.096532   10795 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 08:30:13.096548   10795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 08:30:13.108772   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 08:30:13.323724   10795 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 08:30:13.323770   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:13.323891   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-475995 minikube.k8s.io/updated_at=2025_10_25T08_30_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=addons-475995 minikube.k8s.io/primary=true
	I1025 08:30:13.393513   10795 ops.go:34] apiserver oom_adj: -16
	I1025 08:30:13.393607   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:13.894389   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:14.394782   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:14.894570   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:15.393873   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:15.893906   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:16.393891   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:16.893724   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:17.393958   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:17.893768   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:17.956002   10795 kubeadm.go:1113] duration metric: took 4.632279358s to wait for elevateKubeSystemPrivileges
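	The burst of "kubectl get sa default" runs above, one every ~500ms, is minikube waiting for the default service account to appear before granting kube-system privileges (elevateKubeSystemPrivileges, 4.63s here). The loop reduces to this retry shape; a sketch, not minikube's code:

	// Sketch: re-run a command on a fixed interval until it succeeds or the
	// budget runs out, the pattern visible in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig") // path from the log
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for default service account")
	}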
	I1025 08:30:17.956038   10795 kubeadm.go:402] duration metric: took 16.287543339s to StartCluster
	I1025 08:30:17.956061   10795 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:17.956168   10795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:30:17.956535   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:17.956741   10795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 08:30:17.956780   10795 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:30:17.956825   10795 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 08:30:17.956975   10795 addons.go:69] Setting yakd=true in profile "addons-475995"
	I1025 08:30:17.956992   10795 addons.go:69] Setting default-storageclass=true in profile "addons-475995"
	I1025 08:30:17.957035   10795 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:17.957081   10795 addons.go:238] Setting addon yakd=true in "addons-475995"
	I1025 08:30:17.957094   10795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-475995"
	I1025 08:30:17.957112   10795 addons.go:69] Setting registry-creds=true in profile "addons-475995"
	I1025 08:30:17.957098   10795 addons.go:69] Setting gcp-auth=true in profile "addons-475995"
	I1025 08:30:17.957109   10795 addons.go:69] Setting ingress=true in profile "addons-475995"
	I1025 08:30:17.957105   10795 addons.go:69] Setting ingress-dns=true in profile "addons-475995"
	I1025 08:30:17.957139   10795 addons.go:238] Setting addon registry-creds=true in "addons-475995"
	I1025 08:30:17.957147   10795 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-475995"
	I1025 08:30:17.957164   10795 mustload.go:65] Loading cluster: addons-475995
	I1025 08:30:17.957170   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957172   10795 addons.go:69] Setting volcano=true in profile "addons-475995"
	I1025 08:30:17.957176   10795 addons.go:238] Setting addon ingress-dns=true in "addons-475995"
	I1025 08:30:17.957180   10795 addons.go:69] Setting metrics-server=true in profile "addons-475995"
	I1025 08:30:17.957196   10795 addons.go:238] Setting addon metrics-server=true in "addons-475995"
	I1025 08:30:17.957200   10795 addons.go:238] Setting addon volcano=true in "addons-475995"
	I1025 08:30:17.957217   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957221   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957238   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957124   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957345   10795 addons.go:238] Setting addon ingress=true in "addons-475995"
	I1025 08:30:17.957401   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957415   10795 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:17.957533   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957698   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957713   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957765   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957770   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957787   10795 addons.go:69] Setting inspektor-gadget=true in profile "addons-475995"
	I1025 08:30:17.957800   10795 addons.go:238] Setting addon inspektor-gadget=true in "addons-475995"
	I1025 08:30:17.957821   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.958085   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.958259   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.958916   10795 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-475995"
	I1025 08:30:17.958940   10795 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-475995"
	I1025 08:30:17.959182   10795 addons.go:69] Setting volumesnapshots=true in profile "addons-475995"
	I1025 08:30:17.959196   10795 addons.go:238] Setting addon volumesnapshots=true in "addons-475995"
	I1025 08:30:17.959220   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.959710   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.959965   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.960009   10795 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-475995"
	I1025 08:30:17.960073   10795 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-475995"
	I1025 08:30:17.960092   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.960517   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957164   10795 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-475995"
	I1025 08:30:17.960770   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.960796   10795 out.go:179] * Verifying Kubernetes components...
	I1025 08:30:17.960989   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957768   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957138   10795 addons.go:69] Setting storage-provisioner=true in profile "addons-475995"
	I1025 08:30:17.961431   10795 addons.go:238] Setting addon storage-provisioner=true in "addons-475995"
	I1025 08:30:17.961467   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.961972   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.959980   10795 addons.go:69] Setting cloud-spanner=true in profile "addons-475995"
	I1025 08:30:17.963790   10795 addons.go:238] Setting addon cloud-spanner=true in "addons-475995"
	I1025 08:30:17.963831   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.959987   10795 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-475995"
	I1025 08:30:17.963911   10795 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-475995"
	I1025 08:30:17.963938   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.964304   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.964428   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.960002   10795 addons.go:69] Setting registry=true in profile "addons-475995"
	I1025 08:30:17.964540   10795 addons.go:238] Setting addon registry=true in "addons-475995"
	I1025 08:30:17.964568   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.965058   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.965778   10795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:17.972083   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:18.027489   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:18.033914   10795 addons.go:238] Setting addon default-storageclass=true in "addons-475995"
	I1025 08:30:18.033957   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:18.034411   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	W1025 08:30:18.034635   10795 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 08:30:18.039156   10795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 08:30:18.039324   10795 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 08:30:18.040438   10795 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 08:30:18.041520   10795 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:18.041534   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 08:30:18.041584   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.041896   10795 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:18.041920   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 08:30:18.041931   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 08:30:18.041953   10795 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 08:30:18.041978   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.041999   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.054080   10795 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 08:30:18.058405   10795 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:18.058439   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 08:30:18.058514   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.066464   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 08:30:18.068705   10795 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 08:30:18.069442   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 08:30:18.069468   10795 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 08:30:18.069532   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.070878   10795 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:18.070902   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 08:30:18.070966   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.078878   10795 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-475995"
	I1025 08:30:18.078925   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:18.079389   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:18.081630   10795 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 08:30:18.081944   10795 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 08:30:18.081773   10795 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 08:30:18.083255   10795 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:18.083294   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 08:30:18.083260   10795 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 08:30:18.083363   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.083388   10795 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 08:30:18.083772   10795 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 08:30:18.083862   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.083920   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 08:30:18.083982   10795 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 08:30:18.084032   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.086855   10795 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 08:30:18.087958   10795 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 08:30:18.088019   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 08:30:18.088099   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.096633   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 08:30:18.097010   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:18.099198   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 08:30:18.100277   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:18.100436   10795 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1025 08:30:18.106647   10795 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:18.106669   10795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 08:30:18.106723   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.107336   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 08:30:18.107534   10795 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:18.107552   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 08:30:18.107596   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.107783   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.108490   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 08:30:18.109465   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 08:30:18.109727   10795 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:18.109743   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 08:30:18.109788   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.113703   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.113710   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.115169   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 08:30:18.119693   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 08:30:18.120968   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 08:30:18.122071   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 08:30:18.124194   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.124442   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 08:30:18.124458   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 08:30:18.124530   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.129841   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.133652   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.147448   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.149192   10795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
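	The pipeline above edits the CoreDNS Corefile in place: a hosts{} stanza resolving host.minikube.internal to the docker network gateway (192.168.49.1) is inserted ahead of the forward plugin, then the ConfigMap is replaced. A small Go sketch of the same string edit:

	// Sketch: insert a CoreDNS hosts{} stanza before the "forward ." line,
	// mirroring the sed expression in the log.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(stanza)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}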
	I1025 08:30:18.163267   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.167623   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.167631   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.169273   10795 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 08:30:18.170057   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.171358   10795 out.go:179]   - Using image docker.io/busybox:stable
	I1025 08:30:18.172412   10795 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:18.172435   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 08:30:18.172497   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.173792   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.178321   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	W1025 08:30:18.179211   10795 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 08:30:18.179234   10795 retry.go:31] will retry after 350.881426ms: ssh: handshake failed: EOF
	W1025 08:30:18.179318   10795 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 08:30:18.179326   10795 retry.go:31] will retry after 149.768313ms: ssh: handshake failed: EOF
	I1025 08:30:18.198561   10795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:18.202724   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	W1025 08:30:18.204146   10795 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 08:30:18.204171   10795 retry.go:31] will retry after 136.609188ms: ssh: handshake failed: EOF
	I1025 08:30:18.212610   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
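	The sshutil/retry lines above show dials failing with "ssh: handshake failed: EOF" and being retried after randomized delays, which keeps the many concurrent addon installers from re-dialing the node in lockstep. A generic sketch of that jittered-retry pattern:

	// Sketch: retry an operation with a randomized delay between attempts,
	// the shape of retry.go's "will retry after ..." messages in the log.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func withRetry(attempts int, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			delay := time.Duration(100+rand.Intn(300)) * time.Millisecond // jitter
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := withRetry(5, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: EOF") // simulated transient failure
			}
			return nil
		})
		fmt.Println("result:", err)
	}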
	I1025 08:30:18.300254   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:18.325147   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:18.327928   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 08:30:18.327952   10795 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 08:30:18.334884   10795 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 08:30:18.334914   10795 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 08:30:18.341983   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:18.347072   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:18.353241   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:18.377940   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:18.378464   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 08:30:18.378489   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 08:30:18.382770   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 08:30:18.382800   10795 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 08:30:18.392057   10795 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:18.392076   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 08:30:18.392370   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:18.394469   10795 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 08:30:18.394533   10795 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 08:30:18.394870   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:18.408141   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 08:30:18.408163   10795 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 08:30:18.422706   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 08:30:18.422728   10795 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 08:30:18.430409   10795 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 08:30:18.430544   10795 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 08:30:18.460620   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:18.460673   10795 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 08:30:18.461783   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:18.487811   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 08:30:18.487833   10795 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 08:30:18.494207   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:18.494228   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 08:30:18.501280   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:18.525299   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 08:30:18.525400   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 08:30:18.525562   10795 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 08:30:18.526817   10795 node_ready.go:35] waiting up to 6m0s for node "addons-475995" to be "Ready" ...
	I1025 08:30:18.543078   10795 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:18.543154   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 08:30:18.559182   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 08:30:18.559309   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 08:30:18.570139   10795 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 08:30:18.570162   10795 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 08:30:18.573658   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:18.617322   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 08:30:18.617351   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 08:30:18.620962   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:18.634297   10795 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:18.634382   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 08:30:18.665596   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 08:30:18.665627   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 08:30:18.679930   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:18.714875   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 08:30:18.714983   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 08:30:18.726162   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:18.774617   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 08:30:18.774656   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 08:30:18.821676   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 08:30:18.821769   10795 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 08:30:18.864840   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 08:30:18.865720   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 08:30:18.902470   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 08:30:18.902490   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 08:30:18.945843   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:18.945873   10795 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 08:30:18.975581   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:19.032412   10795 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-475995" context rescaled to 1 replicas
	W1025 08:30:19.259874   10795 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
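	[editor's note] The 'storage-provisioner-rancher' failure above is a standard Kubernetes optimistic-concurrency conflict: another writer updated the StorageClass between minikube's read and its write, so the apiserver rejected the stale object ("the object has been modified; please apply your changes to the latest version and try again"). client-go ships a helper for exactly this read-modify-write race. A minimal sketch, assuming a reachable cluster and the in-guest kubeconfig path seen in the log (this is illustrative, not minikube's actual code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Kubeconfig path is an assumption, copied from the kubectl invocations in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// RetryOnConflict re-reads the object and re-applies the mutation whenever
	// the apiserver answers with a conflict, which is the error seen above.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := clientset.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = clientset.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("local-path marked as default storage class")
}
```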
	I1025 08:30:19.327944   10795 addons.go:479] Verifying addon metrics-server=true in "addons-475995"
	W1025 08:30:19.328285   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:19.328409   10795 retry.go:31] will retry after 133.361273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
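	[editor's note] Every apply of ig-crd.yaml in this run fails identically: kubectl's client-side validation requires each YAML document in a manifest to declare apiVersion and kind, and the generated file evidently contains a document missing both, so retrying without regenerating the file cannot succeed. A hypothetical pre-flight check that reproduces this validation (a sketch using gopkg.in/yaml.v3; the checkManifest helper is mine, not minikube's):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// checkManifest mirrors the kubectl complaint above: every document in a
// multi-document manifest must set apiVersion and kind.
func checkManifest(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // handles documents separated by ---
	for i := 0; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // reached end of stream, all documents valid
			}
			return err
		}
		if doc.APIVersion == "" || doc.Kind == "" {
			return fmt.Errorf("%s: document %d: apiVersion or kind not set", path, i)
		}
	}
}

func main() {
	if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("manifest ok")
}
```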
	I1025 08:30:19.345795   10795 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-475995 service yakd-dashboard -n yakd-dashboard
	
	I1025 08:30:19.462725   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:19.940325   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.319281306s)
	I1025 08:30:19.940363   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.260323665s)
	W1025 08:30:19.940382   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 08:30:19.940397   10795 addons.go:479] Verifying addon registry=true in "addons-475995"
	I1025 08:30:19.940413   10795 retry.go:31] will retry after 335.913198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
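	[editor's note] This snapshot-controller failure, unlike the ig-crd.yaml one, is an ordering problem rather than a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass that instantiates them are applied in a single kubectl invocation, and the apiserver has not finished establishing the new CRDs by the time the custom resource arrives, hence "ensure CRDs are installed first". Minikube handles it by retrying, which works here. An alternative is to block until the CRD reports the Established condition before applying dependents; a sketch that shells out to kubectl (the CRD name is taken from the manifests above):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Block until the apiserver marks the CRD Established.
	wait := exec.Command("kubectl", "wait",
		"--for=condition=Established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	wait.Stdout, wait.Stderr = os.Stdout, os.Stderr
	if err := wait.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "CRD never became Established:", err)
		os.Exit(1)
	}

	// Only now apply the VolumeSnapshotClass that depends on the CRD.
	apply := exec.Command("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	apply.Stdout, apply.Stderr = os.Stdout, os.Stderr
	if err := apply.Run(); err != nil {
		os.Exit(1)
	}
}
```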
	I1025 08:30:19.940499   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.21424676s)
	I1025 08:30:19.940525   10795 addons.go:479] Verifying addon ingress=true in "addons-475995"
	I1025 08:30:19.940854   10795 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-475995"
	I1025 08:30:19.941951   10795 out.go:179] * Verifying ingress addon...
	I1025 08:30:19.941959   10795 out.go:179] * Verifying registry addon...
	I1025 08:30:19.941999   10795 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 08:30:19.945044   10795 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 08:30:19.945055   10795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 08:30:19.945196   10795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 08:30:19.948983   10795 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:30:19.949004   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:19.949056   10795 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:30:19.949069   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:19.949262   10795 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 08:30:19.949273   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
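	[editor's note] The kapi.go:96 lines that dominate the remainder of this log are a simple poll: list the pods matching a label selector and report their phase until they leave Pending. A minimal client-go equivalent of that loop (a sketch only; the selector, namespace, and ~500ms cadence are copied from the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption, taken from this log's kubectl commands.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d/%d pods Running\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
}
```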
	W1025 08:30:20.136246   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:20.136279   10795 retry.go:31] will retry after 297.996815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:20.276806   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:20.434508   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:20.448226   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:20.448300   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:20.448452   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:20.529370   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:20.948481   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:20.948599   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:20.948627   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:21.448461   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:21.448673   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:21.448723   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:21.948320   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:21.948338   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:21.948406   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:22.447672   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:22.447744   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:22.447814   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:22.530024   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:22.758009   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.481141905s)
	I1025 08:30:22.758056   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.323506837s)
	W1025 08:30:22.758091   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:22.758110   10795 retry.go:31] will retry after 572.443818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:22.948794   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:22.948813   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:22.948940   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:23.331042   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:23.447782   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:23.448003   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:23.448038   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:23.870086   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:23.870113   10795 retry.go:31] will retry after 1.201376868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
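	[editor's note] Note the retry delays accumulating in this log: 133ms, 297ms, 572ms, 1.2s, 1.7s, and so on, a roughly doubling schedule with jitter. A self-contained sketch of that pattern (an approximation of the behavior logged by retry.go, not its actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a jittered, roughly doubling delay,
// approximating the 133ms -> 297ms -> 572ms -> ... sequence in the log.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add up to 50% jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed") // fail the first two calls to exercise the backoff
		}
		return nil
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("succeeded after", calls, "calls")
}
```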
	I1025 08:30:23.948127   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:23.948146   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:23.948197   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:24.448349   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:24.448426   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:24.448454   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:24.948561   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:24.948574   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:24.948726   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:25.030094   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:25.071610   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:25.448359   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:25.448576   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:25.448601   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:25.612448   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:25.612484   10795 retry.go:31] will retry after 1.715566176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:25.642547   10795 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 08:30:25.642609   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:25.660623   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:25.766429   10795 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 08:30:25.779599   10795 addons.go:238] Setting addon gcp-auth=true in "addons-475995"
	I1025 08:30:25.779669   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:25.780028   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:25.797680   10795 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 08:30:25.797732   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:25.815536   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:25.913694   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:25.915060   10795 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 08:30:25.916183   10795 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 08:30:25.916199   10795 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 08:30:25.930032   10795 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 08:30:25.930055   10795 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 08:30:25.942955   10795 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:30:25.942975   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 08:30:25.949025   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:25.949029   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:25.949062   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:25.955785   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:30:26.255192   10795 addons.go:479] Verifying addon gcp-auth=true in "addons-475995"
	I1025 08:30:26.256777   10795 out.go:179] * Verifying gcp-auth addon...
	I1025 08:30:26.258456   10795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 08:30:26.260617   10795 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 08:30:26.260634   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:26.448559   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:26.448598   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:26.448766   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:26.761861   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:26.948697   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:26.948825   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:26.948835   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:27.262154   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:27.328209   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:27.448767   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:27.448816   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:27.448846   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:30:27.530366   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:27.761954   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:30:27.854344   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:27.854373   10795 retry.go:31] will retry after 1.553262038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:27.948354   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:27.948451   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:27.948476   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:28.261176   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:28.448075   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:28.448178   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:28.448226   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:28.761989   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:28.948654   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:28.948779   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:28.948829   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:29.261821   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:29.408038   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:29.448547   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:29.448692   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:29.448707   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:29.760829   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:30:29.941684   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:29.941716   10795 retry.go:31] will retry after 2.068473842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:29.948189   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:29.948302   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:29.948444   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:30.030065   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:30.260869   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:30.448964   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:30.448979   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:30.449096   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:30.761933   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:30.948668   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:30.948791   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:30.948940   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:31.261352   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:31.448458   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:31.448509   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:31.448513   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:31.761697   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:31.948161   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:31.948275   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:31.948436   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:32.010496   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:32.261075   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:32.448216   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:32.448332   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:32.448387   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:32.529958   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	W1025 08:30:32.553149   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:32.553185   10795 retry.go:31] will retry after 5.18951034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:32.762318   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:32.948541   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:32.948554   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:32.948687   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:33.261584   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:33.448801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:33.448801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:33.448993   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:33.761039   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:33.947672   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:33.947754   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:33.947914   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:34.261767   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:34.448512   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:34.448510   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:34.448580   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:34.530036   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:34.761790   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:34.948503   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:34.948709   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:34.948731   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:35.261916   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:35.449016   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:35.449161   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:35.449296   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:35.761824   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:35.948463   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:35.948494   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:35.948527   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:36.261126   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:36.448119   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:36.448134   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:36.448118   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:36.761585   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:36.950378   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:36.950420   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:36.950485   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:37.029849   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:37.261437   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:37.448567   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:37.448595   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:37.448750   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:37.743546   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:37.761527   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:37.948697   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:37.948711   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:37.948790   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:38.261759   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:30:38.275914   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:38.275949   10795 retry.go:31] will retry after 3.925212953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:38.448746   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:38.448736   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:38.448834   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:38.761801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:38.948051   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:38.948186   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:38.948311   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:39.261399   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:39.448268   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:39.448321   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:39.448369   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:30:39.529746   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:39.761377   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:39.948006   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:39.948050   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:39.948179   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:40.261333   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:40.447879   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:40.447893   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:40.447893   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:40.762378   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:40.947987   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:40.948036   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:40.948171   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:41.260782   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:41.448803   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:41.448826   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:41.448899   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:41.761994   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:41.947476   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:41.947559   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:41.947560   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:42.029950   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:42.202176   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:42.261175   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:42.447767   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:42.447788   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:42.447779   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:42.748574   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:42.748604   10795 retry.go:31] will retry after 13.216673318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:42.761750   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:42.948720   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:42.948792   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:42.948791   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:43.261723   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:43.448531   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:43.448685   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:43.448726   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:43.761777   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:43.948494   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:43.948597   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:43.948615   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:30:44.030011   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:44.261732   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:44.448443   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:44.448487   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:44.448541   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:44.761803   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:44.948666   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:44.948666   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:44.948865   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:45.261996   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:45.448420   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:45.448528   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:45.448570   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:45.761629   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:45.948280   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:45.948420   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:45.948481   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:46.261304   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:46.447817   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:46.447825   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:46.448005   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:46.529428   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:46.760990   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:46.948347   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:46.948365   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:46.948522   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:47.261712   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:47.448334   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:47.448334   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:47.448368   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:47.761894   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:47.947525   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:47.947576   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:47.947663   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:48.261794   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:48.448534   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:48.448601   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:48.448730   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:48.530059   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:48.761831   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:48.948429   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:48.948444   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:48.948610   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.261302   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:49.447885   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:49.447918   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:49.448138   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.761731   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:49.948215   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:49.948466   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.948474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.261761   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:50.448628   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:50.448635   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.448664   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:50.530233   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:50.762364   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:50.948411   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:50.948411   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.948516   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.261529   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:51.448452   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.448461   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:51.448557   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:51.760942   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:51.947851   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.947873   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:51.947880   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.261654   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:52.448474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:52.448526   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.448554   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:52.761706   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:52.949044   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.949080   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:52.949243   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:53.029987   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:53.261705   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:53.448600   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:53.448621   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:53.448740   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:53.762070   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:53.947696   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:53.947797   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:53.947956   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:54.261811   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:54.448490   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:54.448614   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:54.448732   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:54.761949   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:54.948801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:54.948879   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:54.948992   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:55.030209   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:55.261926   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:55.448548   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:55.448700   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:55.448755   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:55.760992   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:55.948667   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:55.948853   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:55.948963   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:55.965899   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:56.261932   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:56.448564   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:56.448573   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:56.448605   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:56.497037   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:56.497070   10795 retry.go:31] will retry after 15.552811184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:56.761706   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:56.948298   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:56.948307   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:56.948534   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:57.261962   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:57.448694   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:57.448850   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:57.448856   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:57.529964   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:57.761689   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:57.948395   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:57.948539   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:57.948566   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:58.261974   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:58.448795   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:58.448894   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:58.448897   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:58.762007   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:58.949330   10795 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:30:58.949358   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:58.949533   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:58.949553   10795 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:30:58.949568   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:59.029776   10795 node_ready.go:49] node "addons-475995" is "Ready"
	I1025 08:30:59.029817   10795 node_ready.go:38] duration metric: took 40.502972606s for node "addons-475995" to be "Ready" ...
	I1025 08:30:59.029834   10795 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:30:59.029893   10795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:30:59.046537   10795 api_server.go:72] duration metric: took 41.089726877s to wait for apiserver process to appear ...
	I1025 08:30:59.046567   10795 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:30:59.046592   10795 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 08:30:59.051531   10795 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 08:30:59.052504   10795 api_server.go:141] control plane version: v1.34.1
	I1025 08:30:59.052529   10795 api_server.go:131] duration metric: took 5.955457ms to wait for apiserver health ...
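
With the node Ready, the remaining checks complete quickly: minikube confirms the kube-apiserver process via pgrep, then polls https://192.168.49.2:8443/healthz until it answers HTTP 200 with body "ok" (a single probe sufficed here, about 6ms). A minimal sketch of such a health poll, assuming a self-signed test cluster (hence the skipped TLS verification, which a real client should replace with the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 with body "ok" or
	// the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The test cluster presents a self-signed certificate;
				// skipping verification here is for illustration only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz not ok within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
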
	I1025 08:30:59.052537   10795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:30:59.055633   10795 system_pods.go:59] 20 kube-system pods found
	I1025 08:30:59.055683   10795 system_pods.go:61] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending
	I1025 08:30:59.055693   10795 system_pods.go:61] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:30:59.055700   10795 system_pods.go:61] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.055707   10795 system_pods.go:61] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.055711   10795 system_pods.go:61] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending
	I1025 08:30:59.055719   10795 system_pods.go:61] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.055723   10795 system_pods.go:61] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.055726   10795 system_pods.go:61] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.055730   10795 system_pods.go:61] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.055736   10795 system_pods.go:61] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.055740   10795 system_pods.go:61] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.055743   10795 system_pods.go:61] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.055748   10795 system_pods.go:61] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.055757   10795 system_pods.go:61] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.055762   10795 system_pods.go:61] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.055768   10795 system_pods.go:61] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.055773   10795 system_pods.go:61] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.055790   10795 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.055798   10795 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending
	I1025 08:30:59.055803   10795 system_pods.go:61] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:59.055808   10795 system_pods.go:74] duration metric: took 3.265934ms to wait for pod list to return data ...
	I1025 08:30:59.055815   10795 default_sa.go:34] waiting for default service account to be created ...
	I1025 08:30:59.059157   10795 default_sa.go:45] found service account: "default"
	I1025 08:30:59.059180   10795 default_sa.go:55] duration metric: took 3.359054ms for default service account to be created ...
	I1025 08:30:59.059191   10795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 08:30:59.062520   10795 system_pods.go:86] 20 kube-system pods found
	I1025 08:30:59.062546   10795 system_pods.go:89] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending
	I1025 08:30:59.062554   10795 system_pods.go:89] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:30:59.062559   10795 system_pods.go:89] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.062567   10795 system_pods.go:89] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.062570   10795 system_pods.go:89] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending
	I1025 08:30:59.062574   10795 system_pods.go:89] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.062578   10795 system_pods.go:89] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.062585   10795 system_pods.go:89] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.062590   10795 system_pods.go:89] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.062598   10795 system_pods.go:89] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.062602   10795 system_pods.go:89] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.062608   10795 system_pods.go:89] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.062613   10795 system_pods.go:89] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.062621   10795 system_pods.go:89] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.062629   10795 system_pods.go:89] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.062636   10795 system_pods.go:89] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.062661   10795 system_pods.go:89] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.062669   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.062676   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending
	I1025 08:30:59.062685   10795 system_pods.go:89] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:59.062698   10795 retry.go:31] will retry after 276.863432ms: missing components: kube-dns
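
The k8s-apps wait treats kube-dns (served by the coredns pods) as a required component and re-lists the kube-system pods on a short interval (276ms here, 291ms below) until it is Running. A hedged sketch of an equivalent check with client-go follows; the kubeconfig path is taken from the kubectl invocations in this log, and the k8s-app=kube-dns selector is the standard label coredns pods carry.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used by the kubectl calls in this log.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			// coredns pods carry the k8s-app=kube-dns label.
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("kube-dns is running")
				return
			}
			time.Sleep(300 * time.Millisecond) // compare the ~280ms retries in the log
		}
		fmt.Println("timed out waiting for kube-dns")
	}
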
	I1025 08:30:59.264153   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:59.364941   10795 system_pods.go:86] 20 kube-system pods found
	I1025 08:30:59.364981   10795 system_pods.go:89] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:30:59.364991   10795 system_pods.go:89] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:30:59.365003   10795 system_pods.go:89] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.365013   10795 system_pods.go:89] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.365022   10795 system_pods.go:89] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:30:59.365037   10795 system_pods.go:89] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.365048   10795 system_pods.go:89] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.365055   10795 system_pods.go:89] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.365066   10795 system_pods.go:89] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.365075   10795 system_pods.go:89] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.365084   10795 system_pods.go:89] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.365093   10795 system_pods.go:89] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.365104   10795 system_pods.go:89] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.365113   10795 system_pods.go:89] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.365124   10795 system_pods.go:89] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.365133   10795 system_pods.go:89] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.365144   10795 system_pods.go:89] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.365155   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.365172   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.365184   10795 system_pods.go:89] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:59.365207   10795 retry.go:31] will retry after 291.667738ms: missing components: kube-dns
	I1025 08:30:59.458777   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:59.459270   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:59.459656   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:59.660822   10795 system_pods.go:86] 20 kube-system pods found
	I1025 08:30:59.660855   10795 system_pods.go:89] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:30:59.660862   10795 system_pods.go:89] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Running
	I1025 08:30:59.660873   10795 system_pods.go:89] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.660879   10795 system_pods.go:89] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.660885   10795 system_pods.go:89] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:30:59.660889   10795 system_pods.go:89] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.660892   10795 system_pods.go:89] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.660899   10795 system_pods.go:89] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.660902   10795 system_pods.go:89] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.660908   10795 system_pods.go:89] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.660913   10795 system_pods.go:89] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.660922   10795 system_pods.go:89] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.660930   10795 system_pods.go:89] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.660941   10795 system_pods.go:89] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.660954   10795 system_pods.go:89] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.660961   10795 system_pods.go:89] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.660969   10795 system_pods.go:89] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.660974   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.660981   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.660985   10795 system_pods.go:89] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Running
	I1025 08:30:59.660994   10795 system_pods.go:126] duration metric: took 601.798324ms to wait for k8s-apps to be running ...
	I1025 08:30:59.661005   10795 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 08:30:59.661060   10795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:30:59.674653   10795 system_svc.go:56] duration metric: took 13.626415ms WaitForService to wait for kubelet
	I1025 08:30:59.674688   10795 kubeadm.go:586] duration metric: took 41.717881334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:30:59.674712   10795 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:30:59.677206   10795 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 08:30:59.677235   10795 node_conditions.go:123] node cpu capacity is 8
	I1025 08:30:59.677250   10795 node_conditions.go:105] duration metric: took 2.527783ms to run NodePressure ...
	I1025 08:30:59.677264   10795 start.go:241] waiting for startup goroutines ...
	I1025 08:30:59.762112   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:59.947956   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:59.948167   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:59.948183   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.262393   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:00.451199   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.451503   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:00.451821   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:00.762078   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:00.949763   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.949774   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:00.950085   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.262202   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:01.448724   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:01.448910   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.449055   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:01.762030   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:01.949270   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.949355   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:01.949525   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.262326   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:02.448745   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.448991   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.449024   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.762220   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:02.948688   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.948941   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.948943   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.262570   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:03.449308   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.449699   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.449701   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.760937   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:03.948935   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.948987   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.949073   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.262225   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:04.448275   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.448299   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.448419   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.761014   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:04.949145   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.949326   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.949572   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.262294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.448558   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.448875   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.449038   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.761465   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.948372   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.948596   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.948687   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.262185   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.448111   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.448322   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.448324   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.761278   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.948602   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.948729   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.948772   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.261459   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:07.448848   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.448931   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.449002   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:07.761754   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:07.949632   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.950421   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.950737   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.262110   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.448450   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.448531   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.448573   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.762474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.948861   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.948916   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.948949   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.261722   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:09.449028   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.449174   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.449231   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.762142   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:09.948069   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.948367   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.948837   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.261823   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.449119   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.449183   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.449412   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.762381   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.949499   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.949631   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.949633   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.261328   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:11.449676   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.449721   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.449752   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.762159   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:11.948286   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.948321   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.948359   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.050135   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:12.261206   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.449407   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.449590   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.449718   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:12.674879   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:12.674920   10795 retry.go:31] will retry after 26.963689157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:12.762091   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.948699   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.948717   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.948889   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.262556   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.448571   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.448590   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.448784   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.761479   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.949262   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.949294   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.949378   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.262561   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.449290   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.449313   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.449291   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.762121   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.948612   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.948893   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.949035   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.261868   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.449169   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.449193   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:15.449304   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.762740   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.949087   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.949174   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.949268   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.261726   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.448467   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.448527   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.448553   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.878822   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.979604   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.979746   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.979797   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.261105   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.448004   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.448082   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.448294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.762143   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.948042   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.948125   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.948280   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.261292   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.448221   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.448314   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.448340   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.761754   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.948612   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.948796   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.948795   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.261444   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.448554   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.448677   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.448693   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.761594   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.948529   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.948676   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.948819   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.261714   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:20.448373   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.448516   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.448756   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.761188   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:20.948294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.948317   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.948376   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.261815   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.449212   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.449266   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.449492   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:21.762329   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.948349   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.948392   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.948449   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.261966   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.448967   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.449082   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.449156   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.761984   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.947969   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.948020   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.948124   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.261759   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.448918   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.448954   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.448964   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.762051   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.949294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.949339   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.950255   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.262015   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.449284   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.449413   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.449423   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.761745   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.948833   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.948929   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.948954   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.261773   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.449087   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.449116   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.449124   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.761959   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.949284   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.949307   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.949474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.261839   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.451837   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.452029   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.452149   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.762083   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.947674   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.948025   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.948046   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.263328   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:27.483566   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.484602   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.485898   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.761707   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:27.953395   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.953660   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.953810   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.262122   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.447929   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.447963   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.448220   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.763514   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.949340   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.949380   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.949548   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.261633   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.448803   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.448807   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.448882   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.762030   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.947964   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.947975   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.948016   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.262251   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.449245   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.449298   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.449304   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.763508   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.949188   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.949384   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.949401   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.263700   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.449152   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.449209   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.449403   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.762000   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.949091   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.949148   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.949326   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.262793   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.450023   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:32.450894   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.451683   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.762599   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.948989   10795 kapi.go:107] duration metric: took 1m13.003787796s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 08:31:32.949046   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.949071   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.262111   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.448206   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.448233   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.762503   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.948461   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.948597   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.261453   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.448617   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.448973   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.761506   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.948777   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.948854   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.261442   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.448748   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.448801   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:35.762312   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.949104   10795 kapi.go:107] duration metric: took 1m16.004049712s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 08:31:35.949132   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.262203   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.448437   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.761222   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.948054   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.265031   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.450970   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.763730   10795 kapi.go:107] duration metric: took 1m11.505271253s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 08:31:37.765819   10795 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-475995 cluster.
	I1025 08:31:37.767230   10795 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 08:31:37.768458   10795 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 08:31:37.949188   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.448661   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.949064   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.448331   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.639451   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:39.949990   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:40.348837   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:40.348872   10795 retry.go:31] will retry after 25.783943494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:40.449331   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.948661   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.449415   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.948867   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.449514   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.948933   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.449354   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.948900   10795 kapi.go:107] duration metric: took 1m24.003842816s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 08:32:06.135724   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 08:32:06.677419   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 08:32:06.677528   10795 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
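	
	[editor's note] All three apply attempts above fail identically: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because a document in that file lacks the required top-level `apiVersion` and `kind` fields, while every resource in ig-deployment.yaml applies cleanly, so the problem is isolated to the CRD file. For reference, a minimal sketch of the header shape kubectl expects from a CRD manifest; the group and names below are placeholders, since the actual contents of ig-crd.yaml are not shown in this log:
	
	apiVersion: apiextensions.k8s.io/v1     # the field the validator reports as "not set"
	kind: CustomResourceDefinition          # likewise required in every YAML document
	metadata:
	  name: traces.example.io               # placeholder; must be <plural>.<group>
	spec:
	  group: example.io
	  scope: Namespaced
	  names:
	    plural: traces
	    singular: trace
	    kind: Trace
	  versions:
	  - name: v1alpha1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object
	        x-kubernetes-preserve-unknown-fields: true
	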
	I1025 08:32:06.679630   10795 out.go:179] * Enabled addons: ingress-dns, storage-provisioner, registry-creds, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 08:32:06.680924   10795 addons.go:514] duration metric: took 1m48.724096193s for enable addons: enabled=[ingress-dns storage-provisioner registry-creds cloud-spanner amd-gpu-device-plugin nvidia-device-plugin default-storageclass metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 08:32:06.680967   10795 start.go:246] waiting for cluster config update ...
	I1025 08:32:06.680992   10795 start.go:255] writing updated cluster config ...
	I1025 08:32:06.681235   10795 ssh_runner.go:195] Run: rm -f paused
	I1025 08:32:06.685453   10795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:06.689073   10795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nfrz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.693338   10795 pod_ready.go:94] pod "coredns-66bc5c9577-8nfrz" is "Ready"
	I1025 08:32:06.693368   10795 pod_ready.go:86] duration metric: took 4.274014ms for pod "coredns-66bc5c9577-8nfrz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.695276   10795 pod_ready.go:83] waiting for pod "etcd-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.699042   10795 pod_ready.go:94] pod "etcd-addons-475995" is "Ready"
	I1025 08:32:06.699066   10795 pod_ready.go:86] duration metric: took 3.767509ms for pod "etcd-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.700842   10795 pod_ready.go:83] waiting for pod "kube-apiserver-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.704447   10795 pod_ready.go:94] pod "kube-apiserver-addons-475995" is "Ready"
	I1025 08:32:06.704472   10795 pod_ready.go:86] duration metric: took 3.609483ms for pod "kube-apiserver-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.706211   10795 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.088997   10795 pod_ready.go:94] pod "kube-controller-manager-addons-475995" is "Ready"
	I1025 08:32:07.089029   10795 pod_ready.go:86] duration metric: took 382.799332ms for pod "kube-controller-manager-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.289270   10795 pod_ready.go:83] waiting for pod "kube-proxy-4qm6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.688963   10795 pod_ready.go:94] pod "kube-proxy-4qm6g" is "Ready"
	I1025 08:32:07.688991   10795 pod_ready.go:86] duration metric: took 399.694794ms for pod "kube-proxy-4qm6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.889449   10795 pod_ready.go:83] waiting for pod "kube-scheduler-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:08.289422   10795 pod_ready.go:94] pod "kube-scheduler-addons-475995" is "Ready"
	I1025 08:32:08.289482   10795 pod_ready.go:86] duration metric: took 399.935083ms for pod "kube-scheduler-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:08.289493   10795 pod_ready.go:40] duration metric: took 1.604013016s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:08.333893   10795 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 08:32:08.335829   10795 out.go:179] * Done! kubectl is now configured to use "addons-475995" cluster and "default" namespace by default
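	
	[editor's note] "configured to use ... by default" means minikube wrote a cluster, user, and context entry into the active kubeconfig and switched `current-context` to it. Roughly the shape involved; the server address and certificate path below are illustrative only, as the real values are not shown in this log:
	
	apiVersion: v1
	kind: Config
	current-context: addons-475995
	contexts:
	- name: addons-475995
	  context:
	    cluster: addons-475995
	    user: addons-475995
	    namespace: default                 # the "default" namespace mentioned above
	clusters:
	- name: addons-475995
	  cluster:
	    server: https://192.168.49.2:8443  # illustrative address
	    certificate-authority: /home/user/.minikube/ca.crt   # illustrative path
	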
	
	
	==> CRI-O <==
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.512740195Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-pd2np/POD" id=64395a0a-da09-4b4b-aa42-62b1f438a474 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.512818206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.519220915Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pd2np Namespace:default ID:9ae5e96018e66aa1cf23809c306a9468bc29bd45a0b3a98124522fc101d0dd44 UID:78cac399-836a-45e4-b1ed-d5014ba7f91c NetNS:/var/run/netns/f30ffef2-5e6e-44a0-baef-a84e502f5674 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002769f0}] Aliases:map[]}"
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.519261275Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-pd2np to CNI network \"kindnet\" (type=ptp)"
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.529802065Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pd2np Namespace:default ID:9ae5e96018e66aa1cf23809c306a9468bc29bd45a0b3a98124522fc101d0dd44 UID:78cac399-836a-45e4-b1ed-d5014ba7f91c NetNS:/var/run/netns/f30ffef2-5e6e-44a0-baef-a84e502f5674 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002769f0}] Aliases:map[]}"
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.52996678Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-pd2np for CNI network kindnet (type=ptp)"
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.530911818Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.53173971Z" level=info msg="Ran pod sandbox 9ae5e96018e66aa1cf23809c306a9468bc29bd45a0b3a98124522fc101d0dd44 with infra container: default/hello-world-app-5d498dc89-pd2np/POD" id=64395a0a-da09-4b4b-aa42-62b1f438a474 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.533049995Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ec0c3b30-7dfd-4fa1-a8bb-49b9c75d1842 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.533181075Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ec0c3b30-7dfd-4fa1-a8bb-49b9c75d1842 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.533230843Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=ec0c3b30-7dfd-4fa1-a8bb-49b9c75d1842 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.533949657Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1583c5b4-b6c8-4ecd-87ca-051e6387dd2c name=/runtime.v1.ImageService/PullImage
	Oct 25 08:34:50 addons-475995 crio[766]: time="2025-10-25T08:34:50.551394353Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.299113898Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=1583c5b4-b6c8-4ecd-87ca-051e6387dd2c name=/runtime.v1.ImageService/PullImage
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.299762089Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=de5f47bb-e140-4d07-9293-e31a23296431 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.301365534Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0bdc31e0-2a8c-4b64-a230-02ad3c9cda30 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.305064522Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-pd2np/hello-world-app" id=818324a2-cbb0-4164-92b3-8adbbab833b4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.305175003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.310578399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.310785081Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d86ff3252c549073e9762aa1584d35fead4d2de0b8234b545af88100e9dc5441/merged/etc/passwd: no such file or directory"
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.310811898Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d86ff3252c549073e9762aa1584d35fead4d2de0b8234b545af88100e9dc5441/merged/etc/group: no such file or directory"
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.311079983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.343696207Z" level=info msg="Created container 5c7be278da561fd9187944013e1bf0bec9d2b82a941c309fc9ffec5cf7e8e8fa: default/hello-world-app-5d498dc89-pd2np/hello-world-app" id=818324a2-cbb0-4164-92b3-8adbbab833b4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.344412013Z" level=info msg="Starting container: 5c7be278da561fd9187944013e1bf0bec9d2b82a941c309fc9ffec5cf7e8e8fa" id=96ff2560-6322-449d-830a-4a362baf56d2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 08:34:51 addons-475995 crio[766]: time="2025-10-25T08:34:51.346528804Z" level=info msg="Started container" PID=9981 containerID=5c7be278da561fd9187944013e1bf0bec9d2b82a941c309fc9ffec5cf7e8e8fa description=default/hello-world-app-5d498dc89-pd2np/hello-world-app id=96ff2560-6322-449d-830a-4a362baf56d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ae5e96018e66aa1cf23809c306a9468bc29bd45a0b3a98124522fc101d0dd44
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	5c7be278da561       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   9ae5e96018e66       hello-world-app-5d498dc89-pd2np             default
	b7ef1b51ff11f       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   4d9783777c627       registry-creds-764b6fb674-rq26r             kube-system
	f7287863e4716       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   12bc2c69104eb       nginx                                       default
	5260d1e3f01ab       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   86085a08c3178       busybox                                     default
	bab891b7af1f4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	22f2b9269ef02       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	8de87df506db7       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	101a2932de347       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	7f9bf3508d183       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	32a6ca3d206b0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   3d2754cfb52fa       gadget-n5ndm                                gadget
	1e80a58fe2589       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   aac61d2cc9f95       gcp-auth-78565c9fb4-lch5j                   gcp-auth
	e83e5239a5fd8       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   5aeaa6cb706c6       ingress-nginx-controller-675c5ddd98-mdshg   ingress-nginx
	b23168cf49c8b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   60d3d07cc8953       registry-proxy-twv4t                        kube-system
	9ebf337144234       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   8775c1c70888e       nvidia-device-plugin-daemonset-lbh6g        kube-system
	e6efa48ea6a2f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	2107300ec375f       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   4872c0c4279a6       metrics-server-85b7d694d7-5wn89             kube-system
	74693a35fd3fc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   d93563f8b8bdf       amd-gpu-device-plugin-6mxn7                 kube-system
	7358a40adba97       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   a082dd61a282b       csi-hostpath-resizer-0                      kube-system
	2f476752a0079       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   67cfbbb3f470f       snapshot-controller-7d9fbc56b8-8qx69        kube-system
	956b214b91f1c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   7b5889d5da2f8       snapshot-controller-7d9fbc56b8-mcjmk        kube-system
	ecf62df96b889       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   f5c59d674a72b       csi-hostpath-attacher-0                     kube-system
	594922a23e3cb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   72467b81df621       ingress-nginx-admission-patch-49wjr         ingress-nginx
	e1b1ef989389c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   4929cfc5d6339       ingress-nginx-admission-create-2j77z        ingress-nginx
	f9f537f8ebc4f       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   01eaed274722b       yakd-dashboard-5ff678cb9-2ntvm              yakd-dashboard
	5f2a1a6adc37e       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   66eaadae61133       cloud-spanner-emulator-86bd5cbb97-zlql6     default
	d30403917ed89       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   d099737d45cbb       registry-6b586f9694-pw542                   kube-system
	4f77508bbc9ac       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   5879c159ca06b       local-path-provisioner-648f6765c9-th4g2     local-path-storage
	09848150de892       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   7c520df262094       kube-ingress-dns-minikube                   kube-system
	02939bc11915d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   1f95254a870ce       coredns-66bc5c9577-8nfrz                    kube-system
	76b61de4dd3d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   ca7089c35a6e2       storage-provisioner                         kube-system
	ca5be89b6d548       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   6999cf831fc9b       kube-proxy-4qm6g                            kube-system
	19c714713a8d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   41e06c3365086       kindnet-r5lvv                               kube-system
	b8679170a4379       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   d0ef16387f954       kube-controller-manager-addons-475995       kube-system
	7ca23082c83a4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   7507d6f7d05c6       kube-apiserver-addons-475995                kube-system
	c092ee6bc7618       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   4c47d9eecbe27       kube-scheduler-addons-475995                kube-system
	8f6f29d5a814c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   c4545b16cee56       etcd-addons-475995                          kube-system
	
	
	==> coredns [02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc] <==
	[INFO] 10.244.0.21:56306 - 31935 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006690668s
	[INFO] 10.244.0.21:42337 - 14247 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005460615s
	[INFO] 10.244.0.21:35163 - 63669 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005798292s
	[INFO] 10.244.0.21:37011 - 48486 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004800581s
	[INFO] 10.244.0.21:59703 - 62947 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005460404s
	[INFO] 10.244.0.21:35596 - 49088 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001748502s
	[INFO] 10.244.0.21:44357 - 52035 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002260823s
	[INFO] 10.244.0.26:45790 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000249198s
	[INFO] 10.244.0.26:52467 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123907s
	[INFO] 10.244.0.31:51377 - 41864 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000186143s
	[INFO] 10.244.0.31:52415 - 876 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000260634s
	[INFO] 10.244.0.31:53525 - 57156 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000141253s
	[INFO] 10.244.0.31:35599 - 24378 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000158011s
	[INFO] 10.244.0.31:51705 - 58985 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000096781s
	[INFO] 10.244.0.31:34312 - 49031 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00011528s
	[INFO] 10.244.0.31:47839 - 16762 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003149426s
	[INFO] 10.244.0.31:51627 - 37236 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003224847s
	[INFO] 10.244.0.31:49706 - 26733 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004045313s
	[INFO] 10.244.0.31:36832 - 6813 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004153783s
	[INFO] 10.244.0.31:36055 - 1106 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005094579s
	[INFO] 10.244.0.31:57207 - 10580 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005179424s
	[INFO] 10.244.0.31:33397 - 37758 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004435033s
	[INFO] 10.244.0.31:59472 - 47541 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004939193s
	[INFO] 10.244.0.31:58441 - 38745 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00175866s
	[INFO] 10.244.0.31:44805 - 15694 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001739549s
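	
	The NXDOMAIN-then-NOERROR bursts above are ordinary search-path expansion rather than failures: with the pod default of ndots:5, a name like accounts.google.com is tried against each cluster search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE host suffixes) before the bare name resolves upstream, so each run of NXDOMAINs ending in a NOERROR is one successful lookup. A minimal sketch for pulling the same log, assuming the standard kubeadm label k8s-app=kube-dns on the CoreDNS pods:
	
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50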
	
	
	==> describe nodes <==
	Name:               addons-475995
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-475995
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=addons-475995
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_30_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-475995
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-475995"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:30:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-475995
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 08:34:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 08:34:27 +0000   Sat, 25 Oct 2025 08:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 08:34:27 +0000   Sat, 25 Oct 2025 08:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 08:34:27 +0000   Sat, 25 Oct 2025 08:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 08:34:27 +0000   Sat, 25 Oct 2025 08:30:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-475995
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                68791317-d9d7-499d-a824-0c15109dc003
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  default                     cloud-spanner-emulator-86bd5cbb97-zlql6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  default                     hello-world-app-5d498dc89-pd2np              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-n5ndm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  gcp-auth                    gcp-auth-78565c9fb4-lch5j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-mdshg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m32s
	  kube-system                 amd-gpu-device-plugin-6mxn7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-66bc5c9577-8nfrz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m34s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 csi-hostpathplugin-kswpf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-addons-475995                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m39s
	  kube-system                 kindnet-r5lvv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m34s
	  kube-system                 kube-apiserver-addons-475995                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-475995        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-4qm6g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-addons-475995                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 metrics-server-85b7d694d7-5wn89              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m32s
	  kube-system                 nvidia-device-plugin-daemonset-lbh6g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 registry-6b586f9694-pw542                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 registry-creds-764b6fb674-rq26r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 registry-proxy-twv4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 snapshot-controller-7d9fbc56b8-8qx69         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 snapshot-controller-7d9fbc56b8-mcjmk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  local-path-storage          local-path-provisioner-648f6765c9-th4g2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2ntvm               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node addons-475995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node addons-475995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x8 over 4m44s)  kubelet          Node addons-475995 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s                  kubelet          Node addons-475995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s                  kubelet          Node addons-475995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s                  kubelet          Node addons-475995 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m35s                  node-controller  Node addons-475995 event: Registered Node addons-475995 in Controller
	  Normal  NodeReady                3m53s                  kubelet          Node addons-475995 status is now: NodeReady
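	
	The node dump above is standard kubectl output; to regenerate it against this profile, either of the following works (a sketch, assuming the kubeconfig minikube wrote for addons-475995 is active, or using minikube's kubectl passthrough as the report does elsewhere):
	
	  kubectl describe node addons-475995
	  out/minikube-linux-amd64 -p addons-475995 kubectl -- describe node addons-475995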
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
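	
	The repeated "martian source" lines are expected here rather than a network fault: kube-proxy sets route_localnet=1 (see its log below) so NodePorts answer on localhost, and the kernel then logs packets arriving on eth0 with the loopback source 127.0.0.1 destined for the pod IP 10.244.0.20, assuming log_martians is what surfaced them. A sketch for confirming the two sysctls involved, run on the node:
	
	  sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians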
	
	
	==> etcd [8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6] <==
	{"level":"warn","ts":"2025-10-25T08:30:09.253424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.259336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.265104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.271693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.278298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.285062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.291033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.296819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.302829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.308624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.315272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.330980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.343677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:20.504366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:20.511552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:46.792912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:46.799432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:46.824181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59090","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T08:31:16.670995Z","caller":"traceutil/trace.go:172","msg":"trace[1851652172] transaction","detail":"{read_only:false; response_revision:1082; number_of_response:1; }","duration":"120.454278ms","start":"2025-10-25T08:31:16.550519Z","end":"2025-10-25T08:31:16.670973Z","steps":["trace[1851652172] 'process raft request'  (duration: 120.324468ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:31:16.853583Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.299369ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:31:16.853690Z","caller":"traceutil/trace.go:172","msg":"trace[786712106] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1082; }","duration":"126.423617ms","start":"2025-10-25T08:31:16.727249Z","end":"2025-10-25T08:31:16.853672Z","steps":["trace[786712106] 'agreement among raft nodes before linearized reading'  (duration: 64.673615ms)","trace[786712106] 'range keys from in-memory index tree'  (duration: 61.576787ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:31:16.853735Z","caller":"traceutil/trace.go:172","msg":"trace[1627505551] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"178.010546ms","start":"2025-10-25T08:31:16.675706Z","end":"2025-10-25T08:31:16.853716Z","steps":["trace[1627505551] 'process raft request'  (duration: 116.226184ms)","trace[1627505551] 'compare'  (duration: 61.595534ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T08:31:16.877398Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.415379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:31:16.877528Z","caller":"traceutil/trace.go:172","msg":"trace[1241973784] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1083; }","duration":"117.554095ms","start":"2025-10-25T08:31:16.759955Z","end":"2025-10-25T08:31:16.877509Z","steps":["trace[1241973784] 'agreement among raft nodes before linearized reading'  (duration: 93.810302ms)","trace[1241973784] 'range keys from in-memory index tree'  (duration: 23.58797ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:31:31.390692Z","caller":"traceutil/trace.go:172","msg":"trace[210320775] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"127.376797ms","start":"2025-10-25T08:31:31.263296Z","end":"2025-10-25T08:31:31.390673Z","steps":["trace[210320775] 'process raft request'  (duration: 62.840888ms)","trace[210320775] 'compare'  (duration: 64.375909ms)"],"step_count":2}
	
	
	==> gcp-auth [1e80a58fe258978872bb179984502f28d7bb245cad29c0add898927058c6beb6] <==
	2025/10/25 08:31:36 GCP Auth Webhook started!
	2025/10/25 08:32:08 Ready to marshal response ...
	2025/10/25 08:32:08 Ready to write response ...
	2025/10/25 08:32:08 Ready to marshal response ...
	2025/10/25 08:32:08 Ready to write response ...
	2025/10/25 08:32:08 Ready to marshal response ...
	2025/10/25 08:32:08 Ready to write response ...
	2025/10/25 08:32:19 Ready to marshal response ...
	2025/10/25 08:32:19 Ready to write response ...
	2025/10/25 08:32:19 Ready to marshal response ...
	2025/10/25 08:32:19 Ready to write response ...
	2025/10/25 08:32:26 Ready to marshal response ...
	2025/10/25 08:32:26 Ready to write response ...
	2025/10/25 08:32:26 Ready to marshal response ...
	2025/10/25 08:32:26 Ready to write response ...
	2025/10/25 08:32:27 Ready to marshal response ...
	2025/10/25 08:32:27 Ready to write response ...
	2025/10/25 08:32:33 Ready to marshal response ...
	2025/10/25 08:32:33 Ready to write response ...
	2025/10/25 08:32:48 Ready to marshal response ...
	2025/10/25 08:32:48 Ready to write response ...
	2025/10/25 08:34:50 Ready to marshal response ...
	2025/10/25 08:34:50 Ready to write response ...
	
	
	==> kernel <==
	 08:34:51 up 17 min,  0 user,  load average: 0.20, 0.56, 0.30
	Linux addons-475995 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44] <==
	I1025 08:32:48.779832       1 main.go:301] handling current node
	I1025 08:32:58.780748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:58.780788       1 main.go:301] handling current node
	I1025 08:33:08.781738       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:08.781789       1 main.go:301] handling current node
	I1025 08:33:18.780841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:18.780876       1 main.go:301] handling current node
	I1025 08:33:28.780532       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:28.780574       1 main.go:301] handling current node
	I1025 08:33:38.781738       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:38.781781       1 main.go:301] handling current node
	I1025 08:33:48.780204       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:48.780243       1 main.go:301] handling current node
	I1025 08:33:58.780571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:58.780602       1 main.go:301] handling current node
	I1025 08:34:08.780797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:08.780838       1 main.go:301] handling current node
	I1025 08:34:18.780588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:18.780617       1 main.go:301] handling current node
	I1025 08:34:28.780174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:28.780218       1 main.go:301] handling current node
	I1025 08:34:38.780407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:38.780438       1 main.go:301] handling current node
	I1025 08:34:48.787459       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:48.787490       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2] <==
	E1025 08:31:20.124030       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 08:31:20.124040       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:31:20.127168       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:31:20.127217       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 08:31:20.127232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:31:37.589895       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:31:37.589976       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 08:31:37.590390       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.591900       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.597447       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.618280       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.659821       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	I1025 08:31:37.777418       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 08:32:16.032414       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47676: use of closed network connection
	E1025 08:32:16.178779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47702: use of closed network connection
	I1025 08:32:27.448329       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 08:32:27.660697       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.234.116"}
	I1025 08:32:43.335868       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 08:34:50.272788       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.174.195"}
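	
	The v1beta1.metrics.k8s.io noise above is the usual aggregation startup race: the APIService is registered before metrics-server answers on its ClusterIP (10.109.86.150:443), so the aggregator records 503s and connection-refused errors until 08:31:37, when the group is added to the ResourceManager. A sketch for verifying the aggregated API settled afterwards (plain kubectl, nothing minikube-specific):
	
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  kubectl top node addons-475995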
	
	
	==> kube-controller-manager [b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83] <==
	I1025 08:30:16.776185       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 08:30:16.776277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 08:30:16.776282       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 08:30:16.776175       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 08:30:16.776350       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 08:30:16.776352       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 08:30:16.776365       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 08:30:16.776442       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 08:30:16.777544       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 08:30:16.777591       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 08:30:16.778843       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 08:30:16.783069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:30:16.784309       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 08:30:16.794814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 08:30:19.206123       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 08:30:46.787157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:30:46.787312       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 08:30:46.787377       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 08:30:46.804375       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 08:30:46.810905       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 08:30:46.888258       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:30:46.911682       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:31:01.740919       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 08:31:16.892614       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:31:16.919463       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25] <==
	I1025 08:30:18.308803       1 server_linux.go:53] "Using iptables proxy"
	I1025 08:30:18.413944       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:30:18.514107       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:30:18.515465       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 08:30:18.515708       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:30:18.777195       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 08:30:18.777317       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:30:18.827533       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:30:18.835085       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:30:18.835144       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:30:18.843993       1 config.go:200] "Starting service config controller"
	I1025 08:30:18.849172       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:30:18.845780       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:30:18.849309       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:30:18.845796       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:30:18.849361       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:30:18.845336       1 config.go:309] "Starting node config controller"
	I1025 08:30:18.849405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:30:18.849437       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:30:18.949725       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 08:30:18.950316       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 08:30:18.950335       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e] <==
	E1025 08:30:09.796900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:30:09.797274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:30:09.797545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:30:09.797613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:30:09.797665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:09.797719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:30:09.797761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 08:30:09.797771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:30:09.797801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:09.797819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:30:09.797852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 08:30:09.797852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:30:09.797904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:09.797913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:09.798009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:09.798047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:10.692894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:10.752188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:10.905013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:10.983073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:10.998270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:11.003214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 08:30:11.008110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:11.075576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 08:30:12.895206       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
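	
	The burst of "Failed to watch ... forbidden" errors is the normal first-boot race: the scheduler starts its informers before the apiserver has reconciled the default RBAC bindings, and the errors stop once the caches sync at 08:30:12. A sketch for inspecting the binding that eventually grants these list/watch permissions:
	
	  kubectl get clusterrolebinding system:kube-scheduler -o wide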
	
	
	==> kubelet <==
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.703243    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47333382-e40b-46c0-b4eb-f0f26e16f8f2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "47333382-e40b-46c0-b4eb-f0f26e16f8f2" (UID: "47333382-e40b-46c0-b4eb-f0f26e16f8f2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.705230    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47333382-e40b-46c0-b4eb-f0f26e16f8f2-kube-api-access-jq2dp" (OuterVolumeSpecName: "kube-api-access-jq2dp") pod "47333382-e40b-46c0-b4eb-f0f26e16f8f2" (UID: "47333382-e40b-46c0-b4eb-f0f26e16f8f2"). InnerVolumeSpecName "kube-api-access-jq2dp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.706142    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^2f68dd47-b17d-11f0-b703-aa06cd0f095b" (OuterVolumeSpecName: "task-pv-storage") pod "47333382-e40b-46c0-b4eb-f0f26e16f8f2" (UID: "47333382-e40b-46c0-b4eb-f0f26e16f8f2"). InnerVolumeSpecName "pvc-f9bef04c-78ed-4778-a48c-6697625b447f". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.803973    1307 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f9bef04c-78ed-4778-a48c-6697625b447f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2f68dd47-b17d-11f0-b703-aa06cd0f095b\") on node \"addons-475995\" "
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.804012    1307 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/47333382-e40b-46c0-b4eb-f0f26e16f8f2-gcp-creds\") on node \"addons-475995\" DevicePath \"\""
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.804024    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq2dp\" (UniqueName: \"kubernetes.io/projected/47333382-e40b-46c0-b4eb-f0f26e16f8f2-kube-api-access-jq2dp\") on node \"addons-475995\" DevicePath \"\""
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.808314    1307 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-f9bef04c-78ed-4778-a48c-6697625b447f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2f68dd47-b17d-11f0-b703-aa06cd0f095b") on node "addons-475995"
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.904921    1307 reconciler_common.go:299] "Volume detached for volume \"pvc-f9bef04c-78ed-4778-a48c-6697625b447f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2f68dd47-b17d-11f0-b703-aa06cd0f095b\") on node \"addons-475995\" DevicePath \"\""
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.950540    1307 scope.go:117] "RemoveContainer" containerID="a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9"
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.960473    1307 scope.go:117] "RemoveContainer" containerID="a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9"
	Oct 25 08:32:55 addons-475995 kubelet[1307]: E1025 08:32:55.960875    1307 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9\": container with ID starting with a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9 not found: ID does not exist" containerID="a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9"
	Oct 25 08:32:55 addons-475995 kubelet[1307]: I1025 08:32:55.960926    1307 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9"} err="failed to get container status \"a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9\": rpc error: code = NotFound desc = could not find container \"a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9\": container with ID starting with a73294b5dcac06178d2d017bd3eb79c33d74767f8ff1eefd70ecfd73d6a6efa9 not found: ID does not exist"
	Oct 25 08:32:56 addons-475995 kubelet[1307]: I1025 08:32:56.305187    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47333382-e40b-46c0-b4eb-f0f26e16f8f2" path="/var/lib/kubelet/pods/47333382-e40b-46c0-b4eb-f0f26e16f8f2/volumes"
	Oct 25 08:33:01 addons-475995 kubelet[1307]: E1025 08:33:01.914609    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-rq26r" podUID="2efaa5a3-60c5-4bdf-95a9-a203d74287d0"
	Oct 25 08:33:12 addons-475995 kubelet[1307]: I1025 08:33:12.322150    1307 scope.go:117] "RemoveContainer" containerID="d32b2e112bed15408475278c971e93d6c3673967a56a3aad031663575d836f0d"
	Oct 25 08:33:12 addons-475995 kubelet[1307]: I1025 08:33:12.329847    1307 scope.go:117] "RemoveContainer" containerID="2d186b8ae9971921ce7f4253d4e2e454502e9a073b1a25eb9af8483ea5e82951"
	Oct 25 08:33:12 addons-475995 kubelet[1307]: I1025 08:33:12.337623    1307 scope.go:117] "RemoveContainer" containerID="d8b179dbf3dfc707b2d72e9f644af7653e99e8d1938ba634647687079384f4f1"
	Oct 25 08:33:12 addons-475995 kubelet[1307]: I1025 08:33:12.344870    1307 scope.go:117] "RemoveContainer" containerID="1986148c8d79f1de304433624e577b07cc87d1bd09027f3a940f70d05a600d47"
	Oct 25 08:33:17 addons-475995 kubelet[1307]: I1025 08:33:17.047601    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-rq26r" podStartSLOduration=177.985181069 podStartE2EDuration="2m59.047579759s" podCreationTimestamp="2025-10-25 08:30:18 +0000 UTC" firstStartedPulling="2025-10-25 08:33:15.325501914 +0000 UTC m=+183.098102254" lastFinishedPulling="2025-10-25 08:33:16.387900618 +0000 UTC m=+184.160500944" observedRunningTime="2025-10-25 08:33:17.046724774 +0000 UTC m=+184.819325116" watchObservedRunningTime="2025-10-25 08:33:17.047579759 +0000 UTC m=+184.820180102"
	Oct 25 08:33:46 addons-475995 kubelet[1307]: I1025 08:33:46.303206    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-lbh6g" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:33:46 addons-475995 kubelet[1307]: I1025 08:33:46.303464    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6mxn7" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:33:54 addons-475995 kubelet[1307]: I1025 08:33:54.303296    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-twv4t" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:34:50 addons-475995 kubelet[1307]: I1025 08:34:50.302455    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqxm4\" (UniqueName: \"kubernetes.io/projected/78cac399-836a-45e4-b1ed-d5014ba7f91c-kube-api-access-sqxm4\") pod \"hello-world-app-5d498dc89-pd2np\" (UID: \"78cac399-836a-45e4-b1ed-d5014ba7f91c\") " pod="default/hello-world-app-5d498dc89-pd2np"
	Oct 25 08:34:50 addons-475995 kubelet[1307]: I1025 08:34:50.302552    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/78cac399-836a-45e4-b1ed-d5014ba7f91c-gcp-creds\") pod \"hello-world-app-5d498dc89-pd2np\" (UID: \"78cac399-836a-45e4-b1ed-d5014ba7f91c\") " pod="default/hello-world-app-5d498dc89-pd2np"
	Oct 25 08:34:51 addons-475995 kubelet[1307]: I1025 08:34:51.392362    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-pd2np" podStartSLOduration=0.625298477 podStartE2EDuration="1.392344678s" podCreationTimestamp="2025-10-25 08:34:50 +0000 UTC" firstStartedPulling="2025-10-25 08:34:50.533561793 +0000 UTC m=+278.306162129" lastFinishedPulling="2025-10-25 08:34:51.300608007 +0000 UTC m=+279.073208330" observedRunningTime="2025-10-25 08:34:51.391368662 +0000 UTC m=+279.163969004" watchObservedRunningTime="2025-10-25 08:34:51.392344678 +0000 UTC m=+279.164945023"
	
	
	==> storage-provisioner [76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f] <==
	W1025 08:34:26.274555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:28.277724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:28.281540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:30.286042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:30.289955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:32.292827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:32.297808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:34.300684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:34.304458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:36.306954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:36.312049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:38.314725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:38.319625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:40.323006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:40.327072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:42.329925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:42.333711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:44.336442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:44.340394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:46.343561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:46.347375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:48.350245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:48.353931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:50.357320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:34:50.361283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
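The storage-provisioner warnings above are client-go's deprecation notice: the provisioner still watches core/v1 Endpoints, most likely from an Endpoints-based leader-election loop judging by the steady two-second cadence, and Kubernetes flags that as deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. Both views of the same data can be compared directly on this cluster; a quick sketch using only standard kubectl calls and the profile name from the logs:

	kubectl --context addons-475995 -n kube-system get endpoints
	kubectl --context addons-475995 -n kube-system get endpointslices.discovery.k8s.io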
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-475995 -n addons-475995
helpers_test.go:269: (dbg) Run:  kubectl --context addons-475995 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-475995 describe pod ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-475995 describe pod ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr: exit status 1 (56.560655ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2j77z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-49wjr" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-475995 describe pod ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (247.305019ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:34:52.823048   25485 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:34:52.823401   25485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:52.823414   25485 out.go:374] Setting ErrFile to fd 2...
	I1025 08:34:52.823421   25485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:52.823745   25485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:34:52.824104   25485 mustload.go:65] Loading cluster: addons-475995
	I1025 08:34:52.824598   25485 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:52.824617   25485 addons.go:606] checking whether the cluster is paused
	I1025 08:34:52.824759   25485 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:52.824781   25485 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:34:52.825305   25485 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:34:52.844957   25485 ssh_runner.go:195] Run: systemctl --version
	I1025 08:34:52.845018   25485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:34:52.862298   25485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:34:52.962330   25485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:34:52.962408   25485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:34:52.991388   25485 cri.go:89] found id: "b7ef1b51ff11f03e3b2391486618e44c6a427ab181c54a2deaead32c0e30af5f"
	I1025 08:34:52.991409   25485 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:34:52.991416   25485 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:34:52.991422   25485 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:34:52.991432   25485 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:34:52.991437   25485 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:34:52.991442   25485 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:34:52.991445   25485 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:34:52.991449   25485 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:34:52.991456   25485 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:34:52.991460   25485 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:34:52.991464   25485 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:34:52.991468   25485 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:34:52.991471   25485 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:34:52.991476   25485 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:34:52.991482   25485 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:34:52.991487   25485 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:34:52.991492   25485 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:34:52.991496   25485 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:34:52.991501   25485 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:34:52.991506   25485 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:34:52.991510   25485 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:34:52.991515   25485 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:34:52.991520   25485 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:34:52.991528   25485 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:34:52.991532   25485 cri.go:89] found id: ""
	I1025 08:34:52.991580   25485 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:34:53.006232   25485 out.go:203] 
	W1025 08:34:53.007352   25485 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:34:53.007370   25485 out.go:285] * 
	* 
	W1025 08:34:53.010370   25485 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:34:53.011632   25485 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable ingress --alsologtostderr -v=1: exit status 11 (243.186065ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:34:53.069966   25546 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:34:53.070335   25546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:53.070350   25546 out.go:374] Setting ErrFile to fd 2...
	I1025 08:34:53.070356   25546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:53.070852   25546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:34:53.071523   25546 mustload.go:65] Loading cluster: addons-475995
	I1025 08:34:53.071937   25546 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:53.071956   25546 addons.go:606] checking whether the cluster is paused
	I1025 08:34:53.072056   25546 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:53.072076   25546 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:34:53.072514   25546 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:34:53.090050   25546 ssh_runner.go:195] Run: systemctl --version
	I1025 08:34:53.090101   25546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:34:53.106634   25546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:34:53.205293   25546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:34:53.205388   25546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:34:53.234882   25546 cri.go:89] found id: "b7ef1b51ff11f03e3b2391486618e44c6a427ab181c54a2deaead32c0e30af5f"
	I1025 08:34:53.234909   25546 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:34:53.234916   25546 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:34:53.234920   25546 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:34:53.234925   25546 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:34:53.234930   25546 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:34:53.234933   25546 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:34:53.234935   25546 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:34:53.234938   25546 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:34:53.234943   25546 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:34:53.234945   25546 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:34:53.234948   25546 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:34:53.234950   25546 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:34:53.234953   25546 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:34:53.234956   25546 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:34:53.234961   25546 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:34:53.234963   25546 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:34:53.234970   25546 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:34:53.234973   25546 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:34:53.234976   25546 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:34:53.234981   25546 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:34:53.234984   25546 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:34:53.234986   25546 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:34:53.234989   25546 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:34:53.234991   25546 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:34:53.234994   25546 cri.go:89] found id: ""
	I1025 08:34:53.235035   25546 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:34:53.249245   25546 out.go:203] 
	W1025 08:34:53.250657   25546 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:34:53.250676   25546 out.go:285] * 
	* 
	W1025 08:34:53.253615   25546 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:34:53.255119   25546 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.06s)
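Every addons-disable failure in this report shares the root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by first listing kube-system containers with crictl and then asking the low-level OCI runtime for its container list via runc. On this crio profile /run/runc does not exist (crio here evidently drives its OCI runtime with a different state directory, or a different runtime entirely), so the second step exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED. The two steps can be replayed by hand inside the node; both commands below are copied from the log, and minikube ssh is the standard way in:

	minikube -p addons-475995 ssh
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints the container IDs seen above
	sudo runc list -f json                                                      # fails: open /run/runc: no such file or directory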

TestAddons/parallel/InspektorGadget (5.34s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-n5ndm" [1951f2f1-e61a-4224-9fbb-e4acbf8dc327] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003561055s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (330.390541ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:32:34.641489   22414 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:34.641867   22414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:34.641881   22414 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:34.642023   22414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:34.642416   22414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:34.642797   22414 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:34.643229   22414 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:34.643250   22414 addons.go:606] checking whether the cluster is paused
	I1025 08:32:34.643380   22414 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:34.643411   22414 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:34.644054   22414 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:34.671224   22414 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:34.671280   22414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:34.696103   22414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:34.808713   22414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:34.808836   22414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:34.849857   22414 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:34.849887   22414 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:34.849894   22414 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:34.849899   22414 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:34.849904   22414 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:34.849936   22414 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:34.849947   22414 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:34.849952   22414 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:34.849965   22414 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:34.849972   22414 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:34.849988   22414 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:34.849993   22414 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:34.850004   22414 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:34.850009   22414 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:34.850020   22414 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:34.850031   22414 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:34.850041   22414 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:34.850048   22414 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:34.850053   22414 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:34.850058   22414 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:34.850070   22414 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:34.850075   22414 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:34.850086   22414 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:34.850091   22414 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:34.850095   22414 cri.go:89] found id: ""
	I1025 08:32:34.850146   22414 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:34.868875   22414 out.go:203] 
	W1025 08:32:34.870182   22414 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:34.870203   22414 out.go:285] * 
	* 
	W1025 08:32:34.875831   22414 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:34.877528   22414 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.34s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.555721ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I1025 08:32:26.982867    9473 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 08:32:26.982892    9473 kapi.go:107] duration metric: took 4.492848ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:352: "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003506101s
addons_test.go:463: (dbg) Run:  kubectl --context addons-475995 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (249.230037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:32:32.096103   22157 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:32.096403   22157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:32.096413   22157 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:32.096418   22157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:32.096622   22157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:32.096914   22157 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:32.097246   22157 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:32.097258   22157 addons.go:606] checking whether the cluster is paused
	I1025 08:32:32.097334   22157 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:32.097348   22157 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:32.097741   22157 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:32.116065   22157 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:32.116119   22157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:32.135428   22157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:32.236423   22157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:32.236542   22157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:32.266253   22157 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:32.266275   22157 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:32.266279   22157 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:32.266282   22157 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:32.266285   22157 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:32.266288   22157 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:32.266290   22157 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:32.266293   22157 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:32.266295   22157 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:32.266300   22157 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:32.266303   22157 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:32.266305   22157 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:32.266307   22157 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:32.266310   22157 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:32.266315   22157 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:32.266340   22157 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:32.266349   22157 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:32.266354   22157 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:32.266357   22157 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:32.266360   22157 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:32.266362   22157 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:32.266365   22157 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:32.266367   22157 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:32.266370   22157 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:32.266372   22157 cri.go:89] found id: ""
	I1025 08:32:32.266414   22157 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:32.280955   22157 out.go:203] 
	W1025 08:32:32.282490   22157 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:32.282521   22157 out.go:285] * 
	* 
	W1025 08:32:32.285529   22157 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:32.286794   22157 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)
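As with the other parallel addon tests, the functional part of this test passed: metrics-server became healthy in ~5s and the kubectl top pods call ran cleanly; only the paused-check in the disable path exited 11. To confirm the aggregated Metrics API independently of the addon machinery, a sketch using standard kubectl calls:

	kubectl --context addons-475995 top nodes
	kubectl --context addons-475995 get --raw /apis/metrics.k8s.io/v1beta1/pods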

TestAddons/parallel/CSI (29.8s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 08:32:26.978405    9473 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.502485ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-475995 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/25 08:32:29 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-475995 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2ab5c269-8950-4143-a4fd-a504959551fa] Pending
helpers_test.go:352: "task-pv-pod" [2ab5c269-8950-4143-a4fd-a504959551fa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2ab5c269-8950-4143-a4fd-a504959551fa] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.002928017s
addons_test.go:572: (dbg) Run:  kubectl --context addons-475995 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-475995 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-475995 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-475995 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-475995 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-475995 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-475995 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [47333382-e40b-46c0-b4eb-f0f26e16f8f2] Pending
helpers_test.go:352: "task-pv-pod-restore" [47333382-e40b-46c0-b4eb-f0f26e16f8f2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [47333382-e40b-46c0-b4eb-f0f26e16f8f2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003883706s
addons_test.go:614: (dbg) Run:  kubectl --context addons-475995 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-475995 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-475995 delete volumesnapshot new-snapshot-demo
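The sequence above exercises the full CSI round trip: PVC -> pod -> VolumeSnapshot -> PVC restored from the snapshot -> pod on the restored PVC, with cleanup in between. The report does not inline testdata/csi-hostpath-driver/snapshot.yaml; a minimal manifest consistent with the object names in the log would look like the sketch below (the volumeSnapshotClassName is an assumption, csi-hostpath-snapclass being the class the csi-hostpath deployment conventionally ships, and the real testdata may differ):

	kubectl --context addons-475995 create -f - <<'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass
	  source:
	    persistentVolumeClaimName: hpvc
	EOF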
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (238.600334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:32:56.344134   23181 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:56.344429   23181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:56.344439   23181 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:56.344443   23181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:56.344655   23181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:56.344972   23181 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:56.345291   23181 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:56.345305   23181 addons.go:606] checking whether the cluster is paused
	I1025 08:32:56.345381   23181 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:56.345395   23181 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:56.345769   23181 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:56.363427   23181 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:56.363479   23181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:56.380131   23181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:56.478273   23181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:56.478337   23181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:56.506403   23181 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:56.506433   23181 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:56.506438   23181 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:56.506441   23181 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:56.506443   23181 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:56.506447   23181 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:56.506449   23181 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:56.506453   23181 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:56.506455   23181 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:56.506463   23181 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:56.506466   23181 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:56.506469   23181 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:56.506471   23181 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:56.506474   23181 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:56.506476   23181 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:56.506482   23181 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:56.506486   23181 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:56.506493   23181 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:56.506497   23181 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:56.506501   23181 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:56.506506   23181 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:56.506510   23181 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:56.506514   23181 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:56.506519   23181 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:56.506523   23181 cri.go:89] found id: ""
	I1025 08:32:56.506561   23181 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:56.520484   23181 out.go:203] 
	W1025 08:32:56.521677   23181 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:56.521693   23181 out.go:285] * 
	* 
	W1025 08:32:56.524606   23181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:56.525765   23181 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (243.406642ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:32:56.584603   23241 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:56.584904   23241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:56.584915   23241 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:56.584919   23241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:56.585116   23241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:56.585365   23241 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:56.585707   23241 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:56.585722   23241 addons.go:606] checking whether the cluster is paused
	I1025 08:32:56.585800   23241 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:56.585815   23241 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:56.586157   23241 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:56.603710   23241 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:56.603761   23241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:56.620534   23241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:56.719550   23241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:56.719618   23241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:56.748776   23241 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:56.748800   23241 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:56.748804   23241 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:56.748807   23241 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:56.748809   23241 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:56.748814   23241 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:56.748817   23241 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:56.748820   23241 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:56.748822   23241 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:56.748832   23241 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:56.748835   23241 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:56.748838   23241 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:56.748840   23241 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:56.748843   23241 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:56.748846   23241 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:56.748849   23241 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:56.748852   23241 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:56.748856   23241 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:56.748858   23241 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:56.748860   23241 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:56.748863   23241 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:56.748865   23241 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:56.748873   23241 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:56.748878   23241 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:56.748881   23241 cri.go:89] found id: ""
	I1025 08:32:56.748916   23241 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:56.762985   23241 out.go:203] 
	W1025 08:32:56.764362   23241 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:56.764382   23241 out.go:285] * 
	W1025 08:32:56.767684   23241 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:56.768956   23241 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (29.80s)
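Note: the volumesnapshots and csi-hostpath-driver failures above (and the other addon failures in this run) share a single root cause. Before touching an addon, minikube checks whether the cluster is paused (addons.go:606): it lists kube-system container IDs with crictl, then asks runc for container state over SSH. On this crio node `sudo runc list -f json` fails because /run/runc does not exist, so the probe itself errors and the command exits with status 11. Below is a minimal sketch of that probe, not minikube's actual code; runOnNode is a hypothetical stand-in for its ssh_runner, replaying the output captured above.

package main

import (
	"errors"
	"fmt"
	"strings"
)

// runOnNode is a hypothetical stand-in for minikube's ssh_runner. It replays
// what the logs above show: crictl succeeds, runc fails because the node has
// no /run/runc directory.
func runOnNode(cmd string) (string, error) {
	if strings.HasPrefix(cmd, "sudo runc") {
		return "", errors.New("sudo runc list -f json: Process exited with status 1: open /run/runc: no such file or directory")
	}
	// crictl returns the kube-system container IDs, as in the log above.
	return "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980\n", nil
}

// anyPaused mirrors the probe visible in the logs: list kube-system container
// IDs via crictl, then ask runc for container state to see if any are paused.
func anyPaused() (bool, error) {
	if _, err := runOnNode(`sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"`); err != nil {
		return false, err
	}
	out, err := runOnNode("sudo runc list -f json")
	if err != nil {
		// The probe's own failure is propagated instead of being read as
		// "nothing is paused", so the whole addon command aborts.
		return false, fmt.Errorf("check paused: list paused: runc: %w", err)
	}
	return strings.Contains(out, `"status": "paused"`), nil
}

func main() {
	if _, err := anyPaused(); err != nil {
		// minikube surfaces this as MK_ADDON_DISABLE_PAUSED, exit status 11.
		fmt.Println("X Exiting due to MK_ADDON_DISABLE_PAUSED:", err)
	}
}

Because the probe propagates its own failure rather than treating an unreadable runc state directory as "not paused", one missing directory is enough to abort every addons enable/disable call, which is consistent with the cluster of exit-status-11 failures in this report.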

                                                
                                    
TestAddons/parallel/Headlamp (2.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-475995 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-475995 --alsologtostderr -v=1: exit status 11 (248.866221ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:32:16.487228   19469 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:16.487555   19469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:16.487564   19469 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:16.487569   19469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:16.487872   19469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:16.488151   19469 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:16.488551   19469 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:16.488569   19469 addons.go:606] checking whether the cluster is paused
	I1025 08:32:16.488680   19469 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:16.488707   19469 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:16.489147   19469 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:16.507664   19469 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:16.507726   19469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:16.525840   19469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:16.625290   19469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:16.625370   19469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:16.656147   19469 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:16.656188   19469 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:16.656195   19469 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:16.656201   19469 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:16.656205   19469 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:16.656210   19469 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:16.656213   19469 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:16.656217   19469 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:16.656220   19469 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:16.656239   19469 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:16.656245   19469 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:16.656248   19469 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:16.656250   19469 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:16.656253   19469 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:16.656255   19469 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:16.656266   19469 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:16.656272   19469 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:16.656276   19469 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:16.656279   19469 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:16.656281   19469 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:16.656283   19469 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:16.656286   19469 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:16.656288   19469 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:16.656290   19469 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:16.656292   19469 cri.go:89] found id: ""
	I1025 08:32:16.656340   19469 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:16.671071   19469 out.go:203] 
	W1025 08:32:16.672453   19469 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:16.672475   19469 out.go:285] * 
	W1025 08:32:16.675467   19469 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:16.676834   19469 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-475995 --alsologtostderr -v=1": exit status 11
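Note: the enable path hits the same paused-state probe as the disable path, just reported as MK_ADDON_ENABLE_PAUSED. One detail in the docker inspect output below may explain the missing directory: the kic container mounts /run as a tmpfs (see the Tmpfs entry under HostConfig), so /run/runc exists only once runc has actually created state there; if the node's runtime keeps its state under a different root, or no container has gone through runc's default root, the directory never appears. A quick diagnostic sketch, assuming only that the docker CLI is on PATH and using the container name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Look for runc's default state directory inside the kic node container.
	// "2>&1" folds the expected ENOENT message into stdout, and "exit 0"
	// keeps the shell's status zero so CombinedOutput does not error on it.
	cmd := exec.Command("docker", "exec", "addons-475995",
		"sh", "-c", "ls -ld /run/runc 2>&1; exit 0")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("docker exec failed:", err)
		return
	}
	fmt.Print(string(out))
}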
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-475995
helpers_test.go:243: (dbg) docker inspect addons-475995:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822",
	        "Created": "2025-10-25T08:29:56.512830024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11458,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:29:56.546472591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/hostname",
	        "HostsPath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/hosts",
	        "LogPath": "/var/lib/docker/containers/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822/231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822-json.log",
	        "Name": "/addons-475995",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-475995:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-475995",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "231e1e8ad0ccea3954faf7c7729467d7e4d25d409f447c8e6d705f2c2b698822",
	                "LowerDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a2880f7a1df08d007999985bfc780ed0556bf0fcdc5f02fa39b32b813504a31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-475995",
	                "Source": "/var/lib/docker/volumes/addons-475995/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-475995",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-475995",
	                "name.minikube.sigs.k8s.io": "addons-475995",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb9467cbc7b9f95302d79c8838782d19ddb3e500cfde6d9573a8d192715689e5",
	            "SandboxKey": "/var/run/docker/netns/cb9467cbc7b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-475995": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:b2:f8:63:69:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9b1c98f265a8051e0e74890fc7977c69249b8bf87efb30cbeba9f5fa2e7d626c",
	                    "EndpointID": "c8306d5d332d593c0db051f20a6481e8dfc88e1608b1793055dd543e06878553",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-475995",
	                        "231e1e8ad0cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
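Note: this inspect output is also what the failing commands parse to reach the node: the cli_runner lines above run docker container inspect with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, which resolves to 32768 here, matching the ssh client dialed at 127.0.0.1:32768. A small sketch of the same lookup, assuming only that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort extracts the host port mapped to the container's 22/tcp, using
// the same docker inspect template that appears in the logs above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-475995")
	if err != nil {
		fmt.Println(err)
		return
	}
	// For the inspect output above this prints 32768, the port the ssh
	// client then dials at 127.0.0.1.
	fmt.Println("ssh port:", port)
}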
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-475995 -n addons-475995
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-475995 logs -n 25: (1.140897494s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-556430 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-556430   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-556430                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-556430   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-894917 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-894917   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-894917                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-894917   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-556430                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-556430   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-894917                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-894917   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-298854 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-298854 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-298854                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-298854 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-499929 --alsologtostderr --binary-mirror http://127.0.0.1:44063 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-499929   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-499929                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-499929   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-475995                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-475995          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-475995                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-475995          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-475995 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-475995          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-475995 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-475995          │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-475995 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-475995          │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-475995 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-475995          │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:32.773146   10795 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:32.773376   10795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:32.773385   10795 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:32.773389   10795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:32.773610   10795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:29:32.774124   10795 out.go:368] Setting JSON to false
	I1025 08:29:32.774947   10795 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":721,"bootTime":1761380252,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:29:32.775029   10795 start.go:141] virtualization: kvm guest
	I1025 08:29:32.777170   10795 out.go:179] * [addons-475995] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:29:32.778756   10795 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:29:32.778754   10795 notify.go:220] Checking for updates...
	I1025 08:29:32.780083   10795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:32.781413   10795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:29:32.782658   10795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 08:29:32.783778   10795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:29:32.784906   10795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:29:32.786253   10795 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:32.810544   10795 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:29:32.810609   10795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:32.868386   10795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 08:29:32.856916468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:32.868537   10795 docker.go:318] overlay module found
	I1025 08:29:32.870316   10795 out.go:179] * Using the docker driver based on user configuration
	I1025 08:29:32.871566   10795 start.go:305] selected driver: docker
	I1025 08:29:32.871584   10795 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:32.871599   10795 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:29:32.872298   10795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:32.929342   10795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 08:29:32.919413351 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:32.929489   10795 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:32.929712   10795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:29:32.931524   10795 out.go:179] * Using Docker driver with root privileges
	I1025 08:29:32.932878   10795 cni.go:84] Creating CNI manager for ""
	I1025 08:29:32.932939   10795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:29:32.932949   10795 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:29:32.933000   10795 start.go:349] cluster config:
	{Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:29:32.934252   10795 out.go:179] * Starting "addons-475995" primary control-plane node in "addons-475995" cluster
	I1025 08:29:32.935399   10795 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:29:32.936631   10795 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:29:32.937765   10795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:32.937790   10795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:29:32.937809   10795 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:29:32.937818   10795 cache.go:58] Caching tarball of preloaded images
	I1025 08:29:32.937909   10795 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 08:29:32.937923   10795 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:29:32.938260   10795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/config.json ...
	I1025 08:29:32.938286   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/config.json: {Name:mkfeb9e3f581fb26b967f776256af36385607ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:29:32.953807   10795 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:29:32.953900   10795 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:29:32.953915   10795 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 08:29:32.953920   10795 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 08:29:32.953929   10795 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 08:29:32.953936   10795 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 08:29:45.086495   10795 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 08:29:45.086537   10795 cache.go:232] Successfully downloaded all kic artifacts
	I1025 08:29:45.086576   10795 start.go:360] acquireMachinesLock for addons-475995: {Name:mk790996f547979aa305fcb4f65a603a5e244882 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:29:45.086690   10795 start.go:364] duration metric: took 94.93µs to acquireMachinesLock for "addons-475995"
	I1025 08:29:45.086714   10795 start.go:93] Provisioning new machine with config: &{Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
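For reference, the flattened machine config above maps onto ordinary minikube start flags; a rough hand-typed equivalent (profile name, driver, runtime, and resource sizes taken from the struct, everything else left at its default) would be:

	minikube start -p addons-475995 --driver=docker --container-runtime=crio \
	  --memory=4096 --cpus=2 --disk-size=20000mb --kubernetes-version=v1.34.1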
	I1025 08:29:45.086776   10795 start.go:125] createHost starting for "" (driver="docker")
	I1025 08:29:45.088548   10795 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 08:29:45.088781   10795 start.go:159] libmachine.API.Create for "addons-475995" (driver="docker")
	I1025 08:29:45.088809   10795 client.go:168] LocalClient.Create starting
	I1025 08:29:45.088897   10795 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 08:29:45.239559   10795 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 08:29:45.369655   10795 cli_runner.go:164] Run: docker network inspect addons-475995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 08:29:45.386250   10795 cli_runner.go:211] docker network inspect addons-475995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 08:29:45.386308   10795 network_create.go:284] running [docker network inspect addons-475995] to gather additional debugging logs...
	I1025 08:29:45.386324   10795 cli_runner.go:164] Run: docker network inspect addons-475995
	W1025 08:29:45.401564   10795 cli_runner.go:211] docker network inspect addons-475995 returned with exit code 1
	I1025 08:29:45.401589   10795 network_create.go:287] error running [docker network inspect addons-475995]: docker network inspect addons-475995: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-475995 not found
	I1025 08:29:45.401600   10795 network_create.go:289] output of [docker network inspect addons-475995]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-475995 not found
	
	** /stderr **
	I1025 08:29:45.401700   10795 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:29:45.417980   10795 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c3d3a0}
	I1025 08:29:45.418033   10795 network_create.go:124] attempt to create docker network addons-475995 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 08:29:45.418072   10795 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-475995 addons-475995
	I1025 08:29:45.470683   10795 network_create.go:108] docker network addons-475995 192.168.49.0/24 created
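The inspect-then-create sequence above is reproducible by hand; the exit status 1 from the first inspect is the signal that the network does not exist yet, and the create call reuses the free subnet that was just probed for. A sketch using the same flags the log shows:

	docker network inspect addons-475995 --format '{{.Name}}' 2>/dev/null || \
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-475995 \
	  addons-475995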
	I1025 08:29:45.470712   10795 kic.go:121] calculated static IP "192.168.49.2" for the "addons-475995" container
	I1025 08:29:45.470776   10795 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 08:29:45.485995   10795 cli_runner.go:164] Run: docker volume create addons-475995 --label name.minikube.sigs.k8s.io=addons-475995 --label created_by.minikube.sigs.k8s.io=true
	I1025 08:29:45.502368   10795 oci.go:103] Successfully created a docker volume addons-475995
	I1025 08:29:45.502448   10795 cli_runner.go:164] Run: docker run --rm --name addons-475995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-475995 --entrypoint /usr/bin/test -v addons-475995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 08:29:52.128582   10795 cli_runner.go:217] Completed: docker run --rm --name addons-475995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-475995 --entrypoint /usr/bin/test -v addons-475995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.626092431s)
	I1025 08:29:52.128607   10795 oci.go:107] Successfully prepared a docker volume addons-475995
	I1025 08:29:52.128623   10795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:52.128654   10795 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 08:29:52.128722   10795 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-475995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 08:29:56.439151   10795 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-475995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.310380328s)
	I1025 08:29:56.439183   10795 kic.go:203] duration metric: took 4.310525152s to extract preloaded images to volume ...
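The two docker run calls above boil down to: probe the volume with /usr/bin/test, then untar the lz4 preload into it using the tar binary shipped in the kic base image. Reproduced by hand ($KICBASE standing in for the pinned kicbase digest, $PRELOAD for the cached tarball path):

	PRELOAD=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	docker volume create addons-475995
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v addons-475995:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir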
	W1025 08:29:56.439284   10795 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 08:29:56.439324   10795 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 08:29:56.439365   10795 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 08:29:56.497582   10795 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-475995 --name addons-475995 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-475995 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-475995 --network addons-475995 --ip 192.168.49.2 --volume addons-475995:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 08:29:56.780495   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Running}}
	I1025 08:29:56.799591   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:29:56.818491   10795 cli_runner.go:164] Run: docker exec addons-475995 stat /var/lib/dpkg/alternatives/iptables
	I1025 08:29:56.863803   10795 oci.go:144] the created container "addons-475995" has a running status.
	I1025 08:29:56.863836   10795 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa...
	I1025 08:29:57.038968   10795 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 08:29:57.070162   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:29:57.092168   10795 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 08:29:57.092184   10795 kic_runner.go:114] Args: [docker exec --privileged addons-475995 chown docker:docker /home/docker/.ssh/authorized_keys]
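With the public key installed and chowned as above, the node is reachable over the published SSH port (127.0.0.1:32768, per the port mapping resolved later in this log); a manual session would look roughly like:

	ssh -o StrictHostKeyChecking=no -p 32768 \
	  -i $HOME/.minikube/machines/addons-475995/id_rsa docker@127.0.0.1 hostname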
	I1025 08:29:57.138854   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:29:57.159854   10795 machine.go:93] provisionDockerMachine start ...
	I1025 08:29:57.159970   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:57.179192   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.179485   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:57.179498   10795 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:29:57.319606   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-475995
	
	I1025 08:29:57.319636   10795 ubuntu.go:182] provisioning hostname "addons-475995"
	I1025 08:29:57.319704   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:57.337574   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.337866   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:57.337887   10795 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-475995 && echo "addons-475995" | sudo tee /etc/hostname
	I1025 08:29:57.486381   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-475995
	
	I1025 08:29:57.486475   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:57.504393   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.504940   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:57.504975   10795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-475995' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-475995/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-475995' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:29:57.643988   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:29:57.644018   10795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 08:29:57.644041   10795 ubuntu.go:190] setting up certificates
	I1025 08:29:57.644053   10795 provision.go:84] configureAuth start
	I1025 08:29:57.644104   10795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-475995
	I1025 08:29:57.660617   10795 provision.go:143] copyHostCerts
	I1025 08:29:57.660707   10795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 08:29:57.660840   10795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 08:29:57.660927   10795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 08:29:57.660999   10795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.addons-475995 san=[127.0.0.1 192.168.49.2 addons-475995 localhost minikube]
	I1025 08:29:58.214345   10795 provision.go:177] copyRemoteCerts
	I1025 08:29:58.214398   10795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:29:58.214448   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.231580   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.329330   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:29:58.347036   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 08:29:58.362733   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 08:29:58.378269   10795 provision.go:87] duration metric: took 734.204044ms to configureAuth
	I1025 08:29:58.378297   10795 ubuntu.go:206] setting minikube options for container-runtime
	I1025 08:29:58.378465   10795 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:29:58.378574   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.395015   10795 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:58.395257   10795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:29:58.395282   10795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:29:58.634918   10795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:29:58.634942   10795 machine.go:96] duration metric: took 1.475060839s to provisionDockerMachine
	I1025 08:29:58.634954   10795 client.go:171] duration metric: took 13.546136728s to LocalClient.Create
	I1025 08:29:58.634976   10795 start.go:167] duration metric: took 13.546194737s to libmachine.API.Create "addons-475995"
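The /etc/sysconfig/crio.minikube file written just above only takes effect because the crio unit inside the kic image sources it; the log does not show that unit, but the assumed wiring is the usual EnvironmentFile pattern, roughly:

	# assumed systemd drop-in, e.g. /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS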
	I1025 08:29:58.634985   10795 start.go:293] postStartSetup for "addons-475995" (driver="docker")
	I1025 08:29:58.634996   10795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:29:58.635065   10795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:29:58.635114   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.652101   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.751134   10795 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:29:58.754554   10795 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 08:29:58.754594   10795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 08:29:58.754606   10795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 08:29:58.754692   10795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 08:29:58.754726   10795 start.go:296] duration metric: took 119.734756ms for postStartSetup
	I1025 08:29:58.754989   10795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-475995
	I1025 08:29:58.772005   10795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/config.json ...
	I1025 08:29:58.772282   10795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:29:58.772329   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.789692   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.884492   10795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 08:29:58.888737   10795 start.go:128] duration metric: took 13.801947216s to createHost
	I1025 08:29:58.888758   10795 start.go:83] releasing machines lock for "addons-475995", held for 13.802055674s
	I1025 08:29:58.888807   10795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-475995
	I1025 08:29:58.905111   10795 ssh_runner.go:195] Run: cat /version.json
	I1025 08:29:58.905151   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.905198   10795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:29:58.905258   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:29:58.924458   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:58.924846   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:29:59.072759   10795 ssh_runner.go:195] Run: systemctl --version
	I1025 08:29:59.079039   10795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:29:59.111276   10795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:29:59.115572   10795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:29:59.115621   10795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:29:59.139451   10795 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 08:29:59.139476   10795 start.go:495] detecting cgroup driver to use...
	I1025 08:29:59.139501   10795 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 08:29:59.139550   10795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:29:59.154160   10795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:29:59.165287   10795 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:29:59.165349   10795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:29:59.180352   10795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:29:59.196023   10795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:29:59.274471   10795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:29:59.356885   10795 docker.go:234] disabling docker service ...
	I1025 08:29:59.356947   10795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:29:59.374404   10795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:29:59.386183   10795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:29:59.468690   10795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:29:59.547055   10795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:29:59.558873   10795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:29:59.571998   10795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:29:59.572060   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.581462   10795 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 08:29:59.581519   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.589714   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.597910   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.606192   10795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:29:59.613687   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.621977   10795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:29:59.634359   10795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
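Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (the section headers are assumed from CRI-O's stock TOML layout; the values are exactly the ones the commands set):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]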
	I1025 08:29:59.642291   10795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:29:59.649023   10795 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 08:29:59.649077   10795 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 08:29:59.660206   10795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
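The failed sysctl probe above is expected on a fresh container: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is exactly what the follow-up modprobe fixes. The recovery sequence, runnable by hand:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables        # resolves once the module is in
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # needed for pod traffic routing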
	I1025 08:29:59.667133   10795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:29:59.741934   10795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:29:59.840274   10795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:29:59.840344   10795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:29:59.844055   10795 start.go:563] Will wait 60s for crictl version
	I1025 08:29:59.844119   10795 ssh_runner.go:195] Run: which crictl
	I1025 08:29:59.847327   10795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 08:29:59.870721   10795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 08:29:59.870819   10795 ssh_runner.go:195] Run: crio --version
	I1025 08:29:59.896525   10795 ssh_runner.go:195] Run: crio --version
	I1025 08:29:59.924605   10795 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 08:29:59.925921   10795 cli_runner.go:164] Run: docker network inspect addons-475995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:29:59.942397   10795 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 08:29:59.946280   10795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:29:59.955984   10795 kubeadm.go:883] updating cluster {Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:29:59.956101   10795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:59.956146   10795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:29:59.985192   10795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:29:59.985209   10795 crio.go:433] Images already preloaded, skipping extraction
	I1025 08:29:59.985253   10795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:00.009056   10795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:30:00.009077   10795 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:30:00.009084   10795 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 08:30:00.009163   10795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-475995 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
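Note the two ExecStart= lines in the generated unit: the empty first one is the standard systemd idiom for clearing any ExecStart inherited from the base kubelet.service before substituting the minikube-specific command. Once the files are in place they are activated the usual way, as the log does a few lines below:

	sudo systemctl daemon-reload
	sudo systemctl start kubelet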
	I1025 08:30:00.009218   10795 ssh_runner.go:195] Run: crio config
	I1025 08:30:00.050940   10795 cni.go:84] Creating CNI manager for ""
	I1025 08:30:00.050965   10795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:00.050989   10795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:30:00.051019   10795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-475995 NodeName:addons-475995 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:30:00.051173   10795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-475995"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 08:30:00.051246   10795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:30:00.059196   10795 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:30:00.059256   10795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:30:00.066481   10795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 08:30:00.078044   10795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:30:00.091945   10795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
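That 2209-byte file is the kubeadm config rendered above. Before kubeadm init consumes it, a generated multi-document config of this shape can be sanity-checked offline; recent kubeadm releases (v1.26+) ship a validator for it:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new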
	I1025 08:30:00.103152   10795 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 08:30:00.106308   10795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:00.115358   10795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:00.192219   10795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:00.218714   10795 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995 for IP: 192.168.49.2
	I1025 08:30:00.218734   10795 certs.go:195] generating shared ca certs ...
	I1025 08:30:00.218748   10795 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.218885   10795 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 08:30:00.435116   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt ...
	I1025 08:30:00.435147   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt: {Name:mkcb9fce405d7437ce47d5dbf66cddac56bf3772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.435338   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key ...
	I1025 08:30:00.435357   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key: {Name:mk921cbceda1cabf580f4626210826663b159287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.435471   10795 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 08:30:00.785303   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt ...
	I1025 08:30:00.785333   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt: {Name:mk5a1bfd48d2578a0ad435965ac442fbc17cdb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.785527   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key ...
	I1025 08:30:00.785545   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key: {Name:mkbfa3033bd1239fa1892508d295e32f295ca57b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:00.785657   10795 certs.go:257] generating profile certs ...
	I1025 08:30:00.785734   10795 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.key
	I1025 08:30:00.785755   10795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt with IP's: []
	I1025 08:30:01.162628   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt ...
	I1025 08:30:01.162666   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: {Name:mkb84c23f8d49a5a2b7fb68a257fbe3748a01896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.162841   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.key ...
	I1025 08:30:01.162852   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.key: {Name:mk0a0047f2d7e7599a1f88676c3b8af147a29cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.162919   10795 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d
	I1025 08:30:01.162937   10795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 08:30:01.332193   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d ...
	I1025 08:30:01.332221   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d: {Name:mk1282ef513e2075440591dae83dae6157fefdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.332376   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d ...
	I1025 08:30:01.332389   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d: {Name:mkb50ef1bbc2e488e5ad3862947c4eb0d936e180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.332470   10795 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt.781d2a2d -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt
	I1025 08:30:01.332562   10795 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key.781d2a2d -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key
	I1025 08:30:01.332618   10795 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key
	I1025 08:30:01.332636   10795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt with IP's: []
	I1025 08:30:01.444960   10795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt ...
	I1025 08:30:01.444988   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt: {Name:mk345660ad2cca55310cfaa84ac51e8d8f94bef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:01.445138   10795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key ...
	I1025 08:30:01.445148   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key: {Name:mkabd8770e827fc65dc5a90a8ac98e79d7dd057d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
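A quick way to confirm that the SANs baked into the freshly minted apiserver cert match the list in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2) is the usual openssl text dump:

	openssl x509 -noout -text \
	  -in $HOME/.minikube/profiles/addons-475995/apiserver.crt | grep -A1 'Subject Alternative Name'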
	I1025 08:30:01.445325   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 08:30:01.445358   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 08:30:01.445380   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:30:01.445405   10795 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 08:30:01.446006   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:30:01.463167   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 08:30:01.479237   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:30:01.495099   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 08:30:01.511328   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:30:01.527209   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 08:30:01.543028   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:30:01.558850   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:30:01.574770   10795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:30:01.592892   10795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:30:01.604439   10795 ssh_runner.go:195] Run: openssl version
	I1025 08:30:01.610131   10795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:30:01.619928   10795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:01.623252   10795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:01.623290   10795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:01.656851   10795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
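The b5213941.0 symlink follows the classic OpenSSL c_rehash convention: the file name is the subject hash of the CA certificate, which is how the TLS stack locates it under /etc/ssl/certs. The same link can be produced manually:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"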
	I1025 08:30:01.665067   10795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:30:01.668442   10795 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 08:30:01.668498   10795 kubeadm.go:400] StartCluster: {Name:addons-475995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-475995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:30:01.668571   10795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:30:01.668609   10795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:30:01.694312   10795 cri.go:89] found id: ""
	I1025 08:30:01.694379   10795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:30:01.701936   10795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:30:01.709196   10795 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 08:30:01.709254   10795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:30:01.716400   10795 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:30:01.716414   10795 kubeadm.go:157] found existing configuration files:
	
	I1025 08:30:01.716460   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:30:01.723445   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:30:01.723499   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:30:01.730284   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:30:01.737196   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:30:01.737237   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:30:01.743949   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:30:01.750721   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:30:01.750776   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:30:01.757585   10795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:30:01.764920   10795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:30:01.764964   10795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
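The four grep/rm pairs above are one loop in spirit: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is treated as stale and removed before init runs. Condensed:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done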
	I1025 08:30:01.772255   10795 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 08:30:01.807965   10795 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:30:01.808021   10795 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:30:01.827239   10795 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 08:30:01.827336   10795 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 08:30:01.827418   10795 kubeadm.go:318] OS: Linux
	I1025 08:30:01.827506   10795 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 08:30:01.827596   10795 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 08:30:01.827686   10795 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 08:30:01.827759   10795 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 08:30:01.827838   10795 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 08:30:01.827915   10795 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 08:30:01.827996   10795 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 08:30:01.828059   10795 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 08:30:01.880245   10795 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:30:01.880422   10795 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:30:01.880562   10795 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:30:01.888303   10795 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:30:01.890315   10795 out.go:252]   - Generating certificates and keys ...
	I1025 08:30:01.890428   10795 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:30:01.890527   10795 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:30:02.134040   10795 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:30:02.312321   10795 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:30:02.609527   10795 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:30:03.118501   10795 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:30:03.161005   10795 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:30:03.161216   10795 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-475995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:03.575306   10795 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:30:03.575450   10795 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-475995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:04.280791   10795 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:30:04.414694   10795 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:30:04.679354   10795 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:30:04.679416   10795 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:30:05.233758   10795 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:30:05.541937   10795 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:30:05.792508   10795 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:30:06.203342   10795 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:30:06.549912   10795 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:30:06.550406   10795 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:30:06.554084   10795 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:30:06.555867   10795 out.go:252]   - Booting up control plane ...
	I1025 08:30:06.556022   10795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:30:06.556119   10795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:30:06.556181   10795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:30:06.569270   10795 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:30:06.569418   10795 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:30:06.575595   10795 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:30:06.575868   10795 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:30:06.575919   10795 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:30:06.673682   10795 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:30:06.673836   10795 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:30:07.674633   10795 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001140394s
	I1025 08:30:07.678179   10795 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:30:07.678348   10795 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 08:30:07.678494   10795 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:30:07.678616   10795 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:30:08.773235   10795 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.094959701s
	I1025 08:30:09.798947   10795 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.120645761s
	I1025 08:30:11.679810   10795 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00152719s
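For reference, the three control-plane health endpoints polled above can be probed by hand with curl; a minimal sketch using the exact addresses from this log (run from inside the node, since ports 10257 and 10259 bind to loopback):

	curl -sk https://192.168.49.2:8443/livez     # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez       # kube-scheduler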
	I1025 08:30:11.690243   10795 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:30:11.699778   10795 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:30:11.708569   10795 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:30:11.708888   10795 kubeadm.go:318] [mark-control-plane] Marking the node addons-475995 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:30:11.716051   10795 kubeadm.go:318] [bootstrap-token] Using token: nbs337.bo63fhl08q3plpyx
	I1025 08:30:11.717485   10795 out.go:252]   - Configuring RBAC rules ...
	I1025 08:30:11.717605   10795 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:30:11.721130   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:30:11.725836   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:30:11.728022   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:30:11.730293   10795 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:30:11.733147   10795 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 08:30:12.085043   10795 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 08:30:12.498518   10795 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 08:30:13.085051   10795 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 08:30:13.086102   10795 kubeadm.go:318] 
	I1025 08:30:13.086191   10795 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 08:30:13.086202   10795 kubeadm.go:318] 
	I1025 08:30:13.086315   10795 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 08:30:13.086323   10795 kubeadm.go:318] 
	I1025 08:30:13.086355   10795 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 08:30:13.086456   10795 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 08:30:13.086556   10795 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 08:30:13.086576   10795 kubeadm.go:318] 
	I1025 08:30:13.086675   10795 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 08:30:13.086686   10795 kubeadm.go:318] 
	I1025 08:30:13.086760   10795 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 08:30:13.086768   10795 kubeadm.go:318] 
	I1025 08:30:13.086844   10795 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 08:30:13.086949   10795 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 08:30:13.087065   10795 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 08:30:13.087082   10795 kubeadm.go:318] 
	I1025 08:30:13.087187   10795 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 08:30:13.087293   10795 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 08:30:13.087303   10795 kubeadm.go:318] 
	I1025 08:30:13.087430   10795 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token nbs337.bo63fhl08q3plpyx \
	I1025 08:30:13.087573   10795 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 08:30:13.087610   10795 kubeadm.go:318] 	--control-plane 
	I1025 08:30:13.087616   10795 kubeadm.go:318] 
	I1025 08:30:13.087752   10795 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 08:30:13.087763   10795 kubeadm.go:318] 
	I1025 08:30:13.087872   10795 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token nbs337.bo63fhl08q3plpyx \
	I1025 08:30:13.088025   10795 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 08:30:13.089453   10795 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 08:30:13.089616   10795 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 08:30:13.089662   10795 cni.go:84] Creating CNI manager for ""
	I1025 08:30:13.089675   10795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:13.091372   10795 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 08:30:13.092433   10795 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 08:30:13.096532   10795 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 08:30:13.096548   10795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 08:30:13.108772   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 08:30:13.323724   10795 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 08:30:13.323770   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:13.323891   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-475995 minikube.k8s.io/updated_at=2025_10_25T08_30_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=addons-475995 minikube.k8s.io/primary=true
	I1025 08:30:13.393513   10795 ops.go:34] apiserver oom_adj: -16
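The oom_adj value reported here comes from the probe at 08:30:13.323724. On Linux, oom_adj ranges from -17 (OOM killing disabled) to 15, so -16 means the kernel's OOM killer will avoid the apiserver almost unconditionally. The same check by hand, mirroring the logged command:

	cat /proc/$(pgrep kube-apiserver)/oom_adj    # expect -16 on this node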
	I1025 08:30:13.393607   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:13.894389   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:14.394782   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:14.894570   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:15.393873   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:15.893906   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:16.393891   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:16.893724   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:17.393958   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:17.893768   10795 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:17.956002   10795 kubeadm.go:1113] duration metric: took 4.632279358s to wait for elevateKubeSystemPrivileges
	I1025 08:30:17.956038   10795 kubeadm.go:402] duration metric: took 16.287543339s to StartCluster
	I1025 08:30:17.956061   10795 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:17.956168   10795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:30:17.956535   10795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:17.956741   10795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 08:30:17.956780   10795 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:30:17.956825   10795 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 08:30:17.956975   10795 addons.go:69] Setting yakd=true in profile "addons-475995"
	I1025 08:30:17.956992   10795 addons.go:69] Setting default-storageclass=true in profile "addons-475995"
	I1025 08:30:17.957035   10795 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:17.957081   10795 addons.go:238] Setting addon yakd=true in "addons-475995"
	I1025 08:30:17.957094   10795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-475995"
	I1025 08:30:17.957112   10795 addons.go:69] Setting registry-creds=true in profile "addons-475995"
	I1025 08:30:17.957098   10795 addons.go:69] Setting gcp-auth=true in profile "addons-475995"
	I1025 08:30:17.957109   10795 addons.go:69] Setting ingress=true in profile "addons-475995"
	I1025 08:30:17.957105   10795 addons.go:69] Setting ingress-dns=true in profile "addons-475995"
	I1025 08:30:17.957139   10795 addons.go:238] Setting addon registry-creds=true in "addons-475995"
	I1025 08:30:17.957147   10795 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-475995"
	I1025 08:30:17.957164   10795 mustload.go:65] Loading cluster: addons-475995
	I1025 08:30:17.957170   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957172   10795 addons.go:69] Setting volcano=true in profile "addons-475995"
	I1025 08:30:17.957176   10795 addons.go:238] Setting addon ingress-dns=true in "addons-475995"
	I1025 08:30:17.957180   10795 addons.go:69] Setting metrics-server=true in profile "addons-475995"
	I1025 08:30:17.957196   10795 addons.go:238] Setting addon metrics-server=true in "addons-475995"
	I1025 08:30:17.957200   10795 addons.go:238] Setting addon volcano=true in "addons-475995"
	I1025 08:30:17.957217   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957221   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957238   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957124   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957345   10795 addons.go:238] Setting addon ingress=true in "addons-475995"
	I1025 08:30:17.957401   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.957415   10795 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:17.957533   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957698   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957713   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957765   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957770   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957787   10795 addons.go:69] Setting inspektor-gadget=true in profile "addons-475995"
	I1025 08:30:17.957800   10795 addons.go:238] Setting addon inspektor-gadget=true in "addons-475995"
	I1025 08:30:17.957821   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.958085   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.958259   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.958916   10795 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-475995"
	I1025 08:30:17.958940   10795 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-475995"
	I1025 08:30:17.959182   10795 addons.go:69] Setting volumesnapshots=true in profile "addons-475995"
	I1025 08:30:17.959196   10795 addons.go:238] Setting addon volumesnapshots=true in "addons-475995"
	I1025 08:30:17.959220   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.959710   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.959965   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.960009   10795 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-475995"
	I1025 08:30:17.960073   10795 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-475995"
	I1025 08:30:17.960092   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.960517   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957164   10795 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-475995"
	I1025 08:30:17.960770   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.960796   10795 out.go:179] * Verifying Kubernetes components...
	I1025 08:30:17.960989   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957768   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.957138   10795 addons.go:69] Setting storage-provisioner=true in profile "addons-475995"
	I1025 08:30:17.961431   10795 addons.go:238] Setting addon storage-provisioner=true in "addons-475995"
	I1025 08:30:17.961467   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.961972   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.959980   10795 addons.go:69] Setting cloud-spanner=true in profile "addons-475995"
	I1025 08:30:17.963790   10795 addons.go:238] Setting addon cloud-spanner=true in "addons-475995"
	I1025 08:30:17.963831   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.959987   10795 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-475995"
	I1025 08:30:17.963911   10795 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-475995"
	I1025 08:30:17.963938   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.964304   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.964428   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.960002   10795 addons.go:69] Setting registry=true in profile "addons-475995"
	I1025 08:30:17.964540   10795 addons.go:238] Setting addon registry=true in "addons-475995"
	I1025 08:30:17.964568   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:17.965058   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:17.965778   10795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:17.972083   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:18.027489   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:18.033914   10795 addons.go:238] Setting addon default-storageclass=true in "addons-475995"
	I1025 08:30:18.033957   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:18.034411   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	W1025 08:30:18.034635   10795 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 08:30:18.039156   10795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 08:30:18.039324   10795 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 08:30:18.040438   10795 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 08:30:18.041520   10795 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:18.041534   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 08:30:18.041584   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.041896   10795 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:18.041920   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 08:30:18.041931   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 08:30:18.041953   10795 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 08:30:18.041978   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.041999   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
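The Go template in the inspect calls above resolves which host port Docker mapped to the container's 22/tcp; that is how minikube finds its SSH endpoint (port 32768 in the sshutil lines that follow). Standalone, with the profile name from this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-475995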
	I1025 08:30:18.054080   10795 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 08:30:18.058405   10795 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:18.058439   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 08:30:18.058514   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.066464   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 08:30:18.068705   10795 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 08:30:18.069442   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 08:30:18.069468   10795 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 08:30:18.069532   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.070878   10795 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:18.070902   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 08:30:18.070966   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.078878   10795 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-475995"
	I1025 08:30:18.078925   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:18.079389   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:18.081630   10795 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 08:30:18.081944   10795 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 08:30:18.081773   10795 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 08:30:18.083255   10795 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:18.083294   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 08:30:18.083260   10795 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 08:30:18.083363   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.083388   10795 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 08:30:18.083772   10795 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 08:30:18.083862   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.083920   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 08:30:18.083982   10795 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 08:30:18.084032   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.086855   10795 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 08:30:18.087958   10795 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 08:30:18.088019   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 08:30:18.088099   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.096633   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 08:30:18.097010   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:18.099198   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 08:30:18.100277   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:18.100436   10795 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1025 08:30:18.106647   10795 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:18.106669   10795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 08:30:18.106723   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.107336   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 08:30:18.107534   10795 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:18.107552   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 08:30:18.107596   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.107783   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.108490   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 08:30:18.109465   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 08:30:18.109727   10795 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:18.109743   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 08:30:18.109788   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.113703   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.113710   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.115169   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 08:30:18.119693   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 08:30:18.120968   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 08:30:18.122071   10795 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 08:30:18.124194   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.124442   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 08:30:18.124458   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 08:30:18.124530   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.129841   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.133652   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.147448   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.149192   10795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
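The sed pipeline above edits the coredns ConfigMap in place. Reconstructed from its two -e expressions, the Corefile gains a log directive ahead of the existing errors line and a hosts block ahead of the existing forward plugin (untouched default plugins elided, indentation approximate):

	.:53 {
	    log                # inserted before "errors"
	    errors
	    ...
	    hosts {            # inserted before "forward"
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}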
	I1025 08:30:18.163267   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.167623   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.167631   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.169273   10795 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 08:30:18.170057   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.171358   10795 out.go:179]   - Using image docker.io/busybox:stable
	I1025 08:30:18.172412   10795 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:18.172435   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 08:30:18.172497   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:18.173792   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.178321   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	W1025 08:30:18.179211   10795 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 08:30:18.179234   10795 retry.go:31] will retry after 350.881426ms: ssh: handshake failed: EOF
	W1025 08:30:18.179318   10795 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 08:30:18.179326   10795 retry.go:31] will retry after 149.768313ms: ssh: handshake failed: EOF
	I1025 08:30:18.198561   10795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:18.202724   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	W1025 08:30:18.204146   10795 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 08:30:18.204171   10795 retry.go:31] will retry after 136.609188ms: ssh: handshake failed: EOF
	I1025 08:30:18.212610   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:18.300254   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:18.325147   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:18.327928   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 08:30:18.327952   10795 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 08:30:18.334884   10795 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 08:30:18.334914   10795 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 08:30:18.341983   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:18.347072   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:18.353241   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:18.377940   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:18.378464   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 08:30:18.378489   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 08:30:18.382770   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 08:30:18.382800   10795 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 08:30:18.392057   10795 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:18.392076   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 08:30:18.392370   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:18.394469   10795 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 08:30:18.394533   10795 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 08:30:18.394870   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:18.408141   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 08:30:18.408163   10795 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 08:30:18.422706   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 08:30:18.422728   10795 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 08:30:18.430409   10795 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 08:30:18.430544   10795 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 08:30:18.460620   10795 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:18.460673   10795 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 08:30:18.461783   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:18.487811   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 08:30:18.487833   10795 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 08:30:18.494207   10795 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:18.494228   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 08:30:18.501280   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:18.525299   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 08:30:18.525400   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 08:30:18.525562   10795 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 08:30:18.526817   10795 node_ready.go:35] waiting up to 6m0s for node "addons-475995" to be "Ready" ...
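The node readiness wait that starts here has a kubectl one-liner equivalent, sketched with the timeout and node name from this log:

	kubectl wait --for=condition=Ready node/addons-475995 --timeout=6m0s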
	I1025 08:30:18.543078   10795 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:18.543154   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 08:30:18.559182   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 08:30:18.559309   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 08:30:18.570139   10795 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 08:30:18.570162   10795 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 08:30:18.573658   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:18.617322   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 08:30:18.617351   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 08:30:18.620962   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:18.634297   10795 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:18.634382   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 08:30:18.665596   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 08:30:18.665627   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 08:30:18.679930   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:18.714875   10795 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 08:30:18.714983   10795 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 08:30:18.726162   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:18.774617   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 08:30:18.774656   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 08:30:18.821676   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 08:30:18.821769   10795 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 08:30:18.864840   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 08:30:18.865720   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 08:30:18.902470   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 08:30:18.902490   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 08:30:18.945843   10795 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:18.945873   10795 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 08:30:18.975581   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:19.032412   10795 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-475995" context rescaled to 1 replicas
	W1025 08:30:19.259874   10795 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
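The storage-provisioner-rancher failure above is an optimistic-concurrency conflict: the StorageClass was modified between minikube's read and its update, so the stale resourceVersion was rejected. The usual manual fix is a patch, which carries no resourceVersion and so cannot hit this conflict; a sketch using the documented default-class annotation:

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'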
	I1025 08:30:19.327944   10795 addons.go:479] Verifying addon metrics-server=true in "addons-475995"
	W1025 08:30:19.328285   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:19.328409   10795 retry.go:31] will retry after 133.361273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
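A plausible root cause for the ig-crd.yaml validation failure, though the log never prints the file: the scp step at 08:30:18.083772 copied it at only 14 bytes, far too small to carry the apiVersion and kind fields the validator reports as missing. A hypothetical spot-check on the node:

	minikube -p addons-475995 ssh -- \
	  'wc -c /etc/kubernetes/addons/ig-crd.yaml; cat /etc/kubernetes/addons/ig-crd.yaml'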
	I1025 08:30:19.345795   10795 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-475995 service yakd-dashboard -n yakd-dashboard
	
	I1025 08:30:19.462725   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:19.940325   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.319281306s)
	I1025 08:30:19.940363   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.260323665s)
	W1025 08:30:19.940382   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 08:30:19.940397   10795 addons.go:479] Verifying addon registry=true in "addons-475995"
	I1025 08:30:19.940413   10795 retry.go:31] will retry after 335.913198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
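The volumesnapshots failure above is an ordering race rather than a bad manifest: the VolumeSnapshotClass object was submitted in the same apply as the CRD that defines its kind, before the API server had registered it (hence "ensure CRDs are installed first"); the retry at 08:30:20.276806 reruns the apply once the CRDs exist. Done by hand, the race is avoided by waiting for the CRD explicitly, sketched with the paths from this log:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml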
	I1025 08:30:19.940499   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.21424676s)
	I1025 08:30:19.940525   10795 addons.go:479] Verifying addon ingress=true in "addons-475995"
	I1025 08:30:19.940854   10795 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-475995"
	I1025 08:30:19.941951   10795 out.go:179] * Verifying ingress addon...
	I1025 08:30:19.941959   10795 out.go:179] * Verifying registry addon...
	I1025 08:30:19.941999   10795 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 08:30:19.945044   10795 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 08:30:19.945055   10795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 08:30:19.945196   10795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 08:30:19.948983   10795 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:30:19.949004   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:19.949056   10795 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:30:19.949069   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:19.949262   10795 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 08:30:19.949273   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:20.136246   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:20.136279   10795 retry.go:31] will retry after 297.996815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:20.276806   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:20.434508   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:20.448226   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:20.448300   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:20.448452   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:20.529370   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:20.948481   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:20.948599   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:20.948627   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:21.448461   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:21.448673   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:21.448723   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:21.948320   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:21.948338   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:21.948406   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:22.447672   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:22.447744   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:22.447814   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:22.530024   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:22.758009   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.481141905s)
	I1025 08:30:22.758056   10795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.323506837s)
	W1025 08:30:22.758091   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:22.758110   10795 retry.go:31] will retry after 572.443818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:22.948794   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:22.948813   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:22.948940   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:23.331042   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:23.447782   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:23.448003   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:23.448038   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:23.870086   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:23.870113   10795 retry.go:31] will retry after 1.201376868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:23.948127   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:23.948146   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:23.948197   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:24.448349   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:24.448426   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:24.448454   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:24.948561   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:24.948574   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:24.948726   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:25.030094   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:25.071610   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:25.448359   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:25.448576   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:25.448601   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:25.612448   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:25.612484   10795 retry.go:31] will retry after 1.715566176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:25.642547   10795 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 08:30:25.642609   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:25.660623   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:30:25.766429   10795 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 08:30:25.779599   10795 addons.go:238] Setting addon gcp-auth=true in "addons-475995"
	I1025 08:30:25.779669   10795 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:30:25.780028   10795 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:30:25.797680   10795 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 08:30:25.797732   10795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:30:25.815536   10795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
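
Editor's note: the cli_runner/sshutil pair above first reads the host port that Docker mapped to the container's 22/tcp (via the `docker container inspect -f` template), then opens an SSH client to 127.0.0.1 on that port with the machine's id_rsa key. A minimal sketch of that connection, assuming golang.org/x/crypto/ssh; paths and the port mirror the log, and error handling is reduced to panics for brevity:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only because the target is minikube's own container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run a command the way the ssh_runner.go:195 lines do.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
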
	I1025 08:30:25.913694   10795 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:25.915060   10795 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 08:30:25.916183   10795 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 08:30:25.916199   10795 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 08:30:25.930032   10795 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 08:30:25.930055   10795 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 08:30:25.942955   10795 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:30:25.942975   10795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 08:30:25.949025   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:25.949029   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:25.949062   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:25.955785   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:30:26.255192   10795 addons.go:479] Verifying addon gcp-auth=true in "addons-475995"
	I1025 08:30:26.256777   10795 out.go:179] * Verifying gcp-auth addon...
	I1025 08:30:26.258456   10795 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 08:30:26.260617   10795 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 08:30:26.260634   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
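
Editor's note: the kapi.go:75/86/96 sequence lists pods matching a label selector, then polls each until its phase leaves Pending. A sketch of that wait loop with client-go (the API calls are real; the namespace and selector come from the log, the ~500ms poll interval is inferred from the timestamps):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := cs.CoreV1().Pods("gcp-auth").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "kubernetes.io/minikube-addons=gcp-auth",
		})
		if err != nil {
			panic(err)
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
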
	I1025 08:30:26.448559   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:26.448598   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:26.448766   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:26.761861   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:26.948697   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:26.948825   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:26.948835   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:27.262154   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:27.328209   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:27.448767   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:27.448816   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:27.448846   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:30:27.530366   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:27.761954   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:30:27.854344   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:27.854373   10795 retry.go:31] will retry after 1.553262038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:27.948354   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:27.948451   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:27.948476   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:28.261176   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:28.448075   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:28.448178   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:28.448226   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:28.761989   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:28.948654   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:28.948779   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:28.948829   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:29.261821   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:29.408038   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:29.448547   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:29.448692   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:29.448707   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:29.760829   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:30:29.941684   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:29.941716   10795 retry.go:31] will retry after 2.068473842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:29.948189   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:29.948302   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:29.948444   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:30.030065   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:30.260869   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:30.448964   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:30.448979   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:30.449096   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:30.761933   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:30.948668   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:30.948791   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:30.948940   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:31.261352   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:31.448458   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:31.448509   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:31.448513   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:31.761697   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:31.948161   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:31.948275   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:31.948436   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:32.010496   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:32.261075   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:32.448216   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:32.448332   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:32.448387   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:32.529958   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	W1025 08:30:32.553149   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:32.553185   10795 retry.go:31] will retry after 5.18951034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
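
Editor's note: the "will retry after …" delays in this section (roughly 300ms, 572ms, 1.2s, 1.7s, 2.1s, 5.2s, and later 13s and 15s) grow in an exponential-with-jitter pattern. A minimal sketch of that retry shape, not minikube's retry.go; the backoff parameters are illustrative and the KUBECONFIG handling is simplified:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/exec"
	"time"
)

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/ig-crd.yaml",
			"-f", "/etc/kubernetes/addons/ig-deployment.yaml")
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
		// Jitter the delay, then grow the base, roughly matching the logged pattern.
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v (attempt %d)\n", jittered, attempt)
		time.Sleep(jittered)
		delay *= 2
	}
}

Because the underlying manifest never changes between attempts, this loop (like the real one in the log) exhausts its retries against the same validation error.
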
	I1025 08:30:32.762318   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:32.948541   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:32.948554   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:32.948687   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:33.261584   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:33.448801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:33.448801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:33.448993   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:33.761039   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:33.947672   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:33.947754   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:33.947914   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:34.261767   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:34.448512   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:34.448510   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:34.448580   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:34.530036   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:34.761790   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:34.948503   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:34.948709   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:34.948731   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:35.261916   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:35.449016   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:35.449161   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:35.449296   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:35.761824   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:35.948463   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:35.948494   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:35.948527   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:36.261126   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:36.448119   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:36.448134   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:36.448118   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:36.761585   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:36.950378   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:36.950420   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:36.950485   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:37.029849   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:37.261437   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:37.448567   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:37.448595   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:37.448750   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:37.743546   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:37.761527   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:37.948697   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:37.948711   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:37.948790   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:38.261759   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:30:38.275914   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:38.275949   10795 retry.go:31] will retry after 3.925212953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:38.448746   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:38.448736   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:38.448834   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:38.761801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:38.948051   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:38.948186   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:38.948311   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:39.261399   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:39.448268   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:39.448321   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:39.448369   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:30:39.529746   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:39.761377   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:39.948006   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:39.948050   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:39.948179   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:40.261333   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:40.447879   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:40.447893   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:40.447893   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:40.762378   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:40.947987   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:40.948036   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:40.948171   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:41.260782   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:41.448803   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:41.448826   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:41.448899   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:41.761994   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:41.947476   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:41.947559   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:41.947560   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:42.029950   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:42.202176   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:42.261175   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:42.447767   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:42.447788   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:42.447779   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:42.748574   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:42.748604   10795 retry.go:31] will retry after 13.216673318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:42.761750   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:42.948720   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:42.948792   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:42.948791   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:43.261723   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:43.448531   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:43.448685   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:43.448726   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:43.761777   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:43.948494   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:43.948597   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:43.948615   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:30:44.030011   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:44.261732   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:44.448443   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:44.448487   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:44.448541   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:44.761803   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:44.948666   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:44.948666   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:44.948865   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:45.261996   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:45.448420   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:45.448528   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:45.448570   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:45.761629   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:45.948280   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:45.948420   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:45.948481   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:46.261304   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:46.447817   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:46.447825   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:46.448005   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:46.529428   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:46.760990   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:46.948347   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:46.948365   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:46.948522   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:47.261712   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:47.448334   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:47.448334   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:47.448368   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:47.761894   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:47.947525   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:47.947576   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:47.947663   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:48.261794   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:48.448534   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:48.448601   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:48.448730   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:48.530059   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:48.761831   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:48.948429   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:48.948444   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:48.948610   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.261302   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:49.447885   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:49.447918   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:49.448138   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.761731   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:49.948215   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:49.948466   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.948474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.261761   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:50.448628   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:50.448635   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.448664   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:50.530233   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:50.762364   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:50.948411   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:50.948411   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.948516   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.261529   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:51.448452   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.448461   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:51.448557   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:51.760942   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:51.947851   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.947873   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:51.947880   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.261654   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:52.448474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:52.448526   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.448554   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:52.761706   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:52.949044   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.949080   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:52.949243   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:53.029987   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:53.261705   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:53.448600   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:53.448621   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:53.448740   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:53.762070   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:53.947696   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:53.947797   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:53.947956   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:54.261811   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:54.448490   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:54.448614   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:54.448732   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:54.761949   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:54.948801   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:54.948879   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:54.948992   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:55.030209   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:55.261926   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:55.448548   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:55.448700   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:55.448755   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:55.760992   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:55.948667   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:55.948853   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:55.948963   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:55.965899   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:56.261932   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:56.448564   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:56.448573   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:56.448605   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:56.497037   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:56.497070   10795 retry.go:31] will retry after 15.552811184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
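
The validation error above is the actual root cause: ig-crd.yaml is being applied without apiVersion and kind set, so every retry of the same file fails identically. The retry.go:31 lines show the surrounding pattern, a re-run of the failed kubectl apply after a growing, jittered delay. A minimal Go sketch of that pattern follows; runApply and the doubling-with-jitter schedule are illustrative stand-ins, not minikube's exact implementation, and the kubectl invocation drops the log's sudo/KUBECONFIG wrapping.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runApply re-runs the apply from the log; the manifest paths are copied
// from it, the invocation itself is simplified (no sudo, no KUBECONFIG).
func runApply() error {
	out, err := exec.Command("kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/ig-crd.yaml",
		"-f", "/etc/kubernetes/addons/ig-deployment.yaml").CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %w\nstdout/stderr:\n%s", err, out)
	}
	return nil
}

// retryApply retries with a doubling, jittered delay, roughly matching the
// 15.5s -> 26.9s progression seen in the log.
func retryApply(maxAttempts int) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = runApply(); err == nil {
			return nil
		}
		base := time.Duration(1<<uint(attempt)) * 10 * time.Second
		delay := base/2 + time.Duration(rand.Int63n(int64(base))) // +/- 50% jitter
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := retryApply(3); err != nil {
		fmt.Println("giving up:", err)
	}
}

Since the failure is deterministic (a malformed manifest), the backoff only delays the next identical failure; the second attempt at 08:31:12 below fails with the same stderr.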
	I1025 08:30:56.761706   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:56.948298   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:56.948307   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:56.948534   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:57.261962   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:57.448694   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:57.448850   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:57.448856   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:57.529964   10795 node_ready.go:57] node "addons-475995" has "Ready":"False" status (will retry)
	I1025 08:30:57.761689   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:57.948395   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:57.948539   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:57.948566   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:58.261974   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:58.448795   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:58.448894   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:58.448897   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:58.762007   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:58.949330   10795 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:30:58.949358   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:58.949533   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:58.949553   10795 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:30:58.949568   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
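
The kapi.go:86/96 pairs above are minikube's label-selector wait: list the pods matching a selector, log how many were found, and keep polling while any of them is still Pending. A rough Go equivalent that shells out to kubectl is sketched below; waitForSelector is a hypothetical helper (minikube itself talks to the API server directly rather than shelling out), and the 500ms interval is an eyeballed match for the roughly twice-a-second cadence of the log lines.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForSelector polls until every pod matching the label selector reports
// phase Running, echoing the kapi.go:96 "waiting for pod" lines above.
func waitForSelector(ns, selector string) {
	for {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			fmt.Printf("found %d pods for label selector %s\n", len(phases), selector)
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p)
					allRunning = false
				}
			}
			if allRunning {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	waitForSelector("kube-system", "kubernetes.io/minikube-addons=registry")
}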
	I1025 08:30:59.029776   10795 node_ready.go:49] node "addons-475995" is "Ready"
	I1025 08:30:59.029817   10795 node_ready.go:38] duration metric: took 40.502972606s for node "addons-475995" to be "Ready" ...
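
Interleaved with those pod waits, node_ready.go polls the node's Ready condition about every two seconds until it flips to True, which here took roughly 40.5s. A sketch of that check via kubectl's JSONPath output; nodeReady is a hypothetical helper, the node name is taken from the log, and the 2s interval is inferred from the timestamps of the "will retry" lines.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	start := time.Now()
	for {
		ok, err := nodeReady("addons-475995")
		if err == nil && ok {
			fmt.Printf("node is Ready after %s\n", time.Since(start))
			return
		}
		fmt.Println(`node has "Ready":"False" status (will retry)`)
		time.Sleep(2 * time.Second)
	}
}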
	I1025 08:30:59.029834   10795 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:30:59.029893   10795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:30:59.046537   10795 api_server.go:72] duration metric: took 41.089726877s to wait for apiserver process to appear ...
	I1025 08:30:59.046567   10795 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:30:59.046592   10795 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 08:30:59.051531   10795 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 08:30:59.052504   10795 api_server.go:141] control plane version: v1.34.1
	I1025 08:30:59.052529   10795 api_server.go:131] duration metric: took 5.955457ms to wait for apiserver health ...
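
Once the kube-apiserver process is visible to pgrep, api_server.go switches to an HTTP readiness probe: GET /healthz until it returns 200 with body "ok". A self-contained sketch of that probe; checkHealthz is illustrative, the URL is copied from the log, and the insecure TLS client is an assumption made here because a self-signed apiserver certificate is being reached by IP.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz polls the apiserver's /healthz endpoint until it answers 200.
func checkHealthz(url string) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification for the self-signed cluster cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200:\n%s\n", url, body)
				return
			}
		}
		time.Sleep(time.Second)
	}
}

func main() {
	checkHealthz("https://192.168.49.2:8443/healthz")
}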
	I1025 08:30:59.052537   10795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:30:59.055633   10795 system_pods.go:59] 20 kube-system pods found
	I1025 08:30:59.055683   10795 system_pods.go:61] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending
	I1025 08:30:59.055693   10795 system_pods.go:61] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:30:59.055700   10795 system_pods.go:61] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.055707   10795 system_pods.go:61] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.055711   10795 system_pods.go:61] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending
	I1025 08:30:59.055719   10795 system_pods.go:61] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.055723   10795 system_pods.go:61] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.055726   10795 system_pods.go:61] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.055730   10795 system_pods.go:61] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.055736   10795 system_pods.go:61] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.055740   10795 system_pods.go:61] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.055743   10795 system_pods.go:61] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.055748   10795 system_pods.go:61] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.055757   10795 system_pods.go:61] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.055762   10795 system_pods.go:61] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.055768   10795 system_pods.go:61] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.055773   10795 system_pods.go:61] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.055790   10795 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.055798   10795 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending
	I1025 08:30:59.055803   10795 system_pods.go:61] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:59.055808   10795 system_pods.go:74] duration metric: took 3.265934ms to wait for pod list to return data ...
	I1025 08:30:59.055815   10795 default_sa.go:34] waiting for default service account to be created ...
	I1025 08:30:59.059157   10795 default_sa.go:45] found service account: "default"
	I1025 08:30:59.059180   10795 default_sa.go:55] duration metric: took 3.359054ms for default service account to be created ...
	I1025 08:30:59.059191   10795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 08:30:59.062520   10795 system_pods.go:86] 20 kube-system pods found
	I1025 08:30:59.062546   10795 system_pods.go:89] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending
	I1025 08:30:59.062554   10795 system_pods.go:89] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:30:59.062559   10795 system_pods.go:89] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.062567   10795 system_pods.go:89] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.062570   10795 system_pods.go:89] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending
	I1025 08:30:59.062574   10795 system_pods.go:89] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.062578   10795 system_pods.go:89] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.062585   10795 system_pods.go:89] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.062590   10795 system_pods.go:89] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.062598   10795 system_pods.go:89] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.062602   10795 system_pods.go:89] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.062608   10795 system_pods.go:89] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.062613   10795 system_pods.go:89] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.062621   10795 system_pods.go:89] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.062629   10795 system_pods.go:89] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.062636   10795 system_pods.go:89] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.062661   10795 system_pods.go:89] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.062669   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.062676   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending
	I1025 08:30:59.062685   10795 system_pods.go:89] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:59.062698   10795 retry.go:31] will retry after 276.863432ms: missing components: kube-dns
	I1025 08:30:59.264153   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:59.364941   10795 system_pods.go:86] 20 kube-system pods found
	I1025 08:30:59.364981   10795 system_pods.go:89] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:30:59.364991   10795 system_pods.go:89] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:30:59.365003   10795 system_pods.go:89] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.365013   10795 system_pods.go:89] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.365022   10795 system_pods.go:89] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:30:59.365037   10795 system_pods.go:89] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.365048   10795 system_pods.go:89] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.365055   10795 system_pods.go:89] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.365066   10795 system_pods.go:89] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.365075   10795 system_pods.go:89] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.365084   10795 system_pods.go:89] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.365093   10795 system_pods.go:89] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.365104   10795 system_pods.go:89] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.365113   10795 system_pods.go:89] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.365124   10795 system_pods.go:89] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.365133   10795 system_pods.go:89] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.365144   10795 system_pods.go:89] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.365155   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.365172   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.365184   10795 system_pods.go:89] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:59.365207   10795 retry.go:31] will retry after 291.667738ms: missing components: kube-dns
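
The two retries above come from the k8s-apps check: list every kube-system pod, then retry after a short delay while any required component (here kube-dns, served by the coredns pod) is not yet Running. A simplified Go version of that loop; podPhases and waitForApps are hypothetical names, and mapping components to pod-name prefixes is an assumption made for the sketch.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podPhases returns a name -> phase map for kube-system pods, parsed from
// kubectl output; minikube itself uses the Kubernetes client API here.
func podPhases() (map[string]string, error) {
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
		"-o", `jsonpath={range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}`).Output()
	if err != nil {
		return nil, err
	}
	phases := map[string]string{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if fields := strings.Fields(line); len(fields) == 2 {
			phases[fields[0]] = fields[1]
		}
	}
	return phases, nil
}

// waitForApps blocks until, for every required prefix, at least one pod with
// that name prefix is Running; "coredns" stands in for the log's kube-dns.
func waitForApps(prefixes []string) {
	for {
		phases, err := podPhases()
		if err == nil {
			var missing []string
			for _, p := range prefixes {
				found := false
				for name, phase := range phases {
					if strings.HasPrefix(name, p) && phase == "Running" {
						found = true
					}
				}
				if !found {
					missing = append(missing, p)
				}
			}
			if len(missing) == 0 {
				return
			}
			fmt.Println("will retry: missing components:", strings.Join(missing, " "))
		}
		time.Sleep(300 * time.Millisecond)
	}
}

func main() {
	waitForApps([]string{"coredns", "etcd", "kube-apiserver"})
}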
	I1025 08:30:59.458777   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:59.459270   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:59.459656   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:59.660822   10795 system_pods.go:86] 20 kube-system pods found
	I1025 08:30:59.660855   10795 system_pods.go:89] "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:30:59.660862   10795 system_pods.go:89] "coredns-66bc5c9577-8nfrz" [f5c379aa-406b-4310-a68b-6a82053bf8b2] Running
	I1025 08:30:59.660873   10795 system_pods.go:89] "csi-hostpath-attacher-0" [5892d6e4-96d1-4822-a12b-2159f862138e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:59.660879   10795 system_pods.go:89] "csi-hostpath-resizer-0" [6ba30da1-4978-4215-828b-50d222d8d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:59.660885   10795 system_pods.go:89] "csi-hostpathplugin-kswpf" [60b109a5-b18b-4763-a0a1-bda731a33296] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:30:59.660889   10795 system_pods.go:89] "etcd-addons-475995" [12dbda1d-2cd5-40de-a9f4-285211cbd6c0] Running
	I1025 08:30:59.660892   10795 system_pods.go:89] "kindnet-r5lvv" [f7808ccd-9aa3-4562-818f-662d73c14492] Running
	I1025 08:30:59.660899   10795 system_pods.go:89] "kube-apiserver-addons-475995" [e9635248-4fe0-43af-b86b-e1e54afbc816] Running
	I1025 08:30:59.660902   10795 system_pods.go:89] "kube-controller-manager-addons-475995" [d04654b0-ec91-4f99-be7c-f8ab3cd07034] Running
	I1025 08:30:59.660908   10795 system_pods.go:89] "kube-ingress-dns-minikube" [984b5858-dc1e-464c-bcd7-14c93276e897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:59.660913   10795 system_pods.go:89] "kube-proxy-4qm6g" [961cebca-e61a-4d8e-a07d-bebc721cdd0a] Running
	I1025 08:30:59.660922   10795 system_pods.go:89] "kube-scheduler-addons-475995" [c3c0a588-d909-4e32-9593-62aa2677e202] Running
	I1025 08:30:59.660930   10795 system_pods.go:89] "metrics-server-85b7d694d7-5wn89" [dfa2552c-3145-4aeb-9020-68741a561f26] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:59.660941   10795 system_pods.go:89] "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:59.660954   10795 system_pods.go:89] "registry-6b586f9694-pw542" [a651763e-0164-4d16-b5df-416458fbf8d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:59.660961   10795 system_pods.go:89] "registry-creds-764b6fb674-rq26r" [2efaa5a3-60c5-4bdf-95a9-a203d74287d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:59.660969   10795 system_pods.go:89] "registry-proxy-twv4t" [21eb7156-e697-4b86-bcee-d11e413607b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:59.660974   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8qx69" [5d818bf3-10f7-4cdb-9a45-dc6822f65f43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.660981   10795 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mcjmk" [c441cd3a-1a0e-4f41-82fa-b5cef6e25e58] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:59.660985   10795 system_pods.go:89] "storage-provisioner" [f8ecda33-fe42-4850-8cab-46d48640b6a0] Running
	I1025 08:30:59.660994   10795 system_pods.go:126] duration metric: took 601.798324ms to wait for k8s-apps to be running ...
	I1025 08:30:59.661005   10795 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 08:30:59.661060   10795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:30:59.674653   10795 system_svc.go:56] duration metric: took 13.626415ms WaitForService to wait for kubelet
	I1025 08:30:59.674688   10795 kubeadm.go:586] duration metric: took 41.717881334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
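
The kubeadm.go:586 line shows what the 41.7s actually waited on: a map of named readiness conditions (apiserver, apps_running, default_sa, extra, kubelet, node_ready, system_pods), the last piece being the systemctl is-active check for kubelet just above. A minimal sketch of walking such a condition list; only the kubelet check is implemented, the rest are stubbed as comments, and the sketch drops the log's sudo and extra "service" token.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubeletActive mirrors the log's systemctl check: is-active --quiet exits 0
// iff the unit is active, so the error value is the whole answer.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	start := time.Now()
	conditions := []struct {
		name  string
		check func() bool
	}{
		{"kubelet", kubeletActive},
		// apiserver, apps_running, default_sa, node_ready, system_pods and
		// extra would each get their own check func in the same way.
	}
	for _, c := range conditions {
		for !c.check() {
			time.Sleep(time.Second)
		}
		fmt.Printf("%s ready after %s\n", c.name, time.Since(start))
	}
}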
	I1025 08:30:59.674712   10795 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:30:59.677206   10795 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 08:30:59.677235   10795 node_conditions.go:123] node cpu capacity is 8
	I1025 08:30:59.677250   10795 node_conditions.go:105] duration metric: took 2.527783ms to run NodePressure ...
	I1025 08:30:59.677264   10795 start.go:241] waiting for startup goroutines ...
	I1025 08:30:59.762112   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:59.947956   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:59.948167   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:59.948183   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.262393   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:00.451199   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.451503   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:00.451821   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:00.762078   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:00.949763   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.949774   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:00.950085   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.262202   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:01.448724   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:01.448910   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.449055   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:01.762030   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:01.949270   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.949355   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:01.949525   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.262326   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:02.448745   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.448991   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.449024   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.762220   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:02.948688   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.948941   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.948943   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.262570   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:03.449308   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.449699   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.449701   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.760937   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:03.948935   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.948987   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.949073   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.262225   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:04.448275   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.448299   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.448419   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.761014   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:04.949145   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.949326   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.949572   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.262294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.448558   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.448875   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.449038   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.761465   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.948372   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.948596   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.948687   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.262185   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.448111   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.448322   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.448324   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.761278   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.948602   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.948729   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.948772   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.261459   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:07.448848   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.448931   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.449002   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:07.761754   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:07.949632   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.950421   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.950737   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.262110   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.448450   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.448531   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.448573   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.762474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.948861   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.948916   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.948949   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.261722   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:09.449028   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.449174   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.449231   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.762142   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:09.948069   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.948367   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.948837   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.261823   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.449119   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.449183   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.449412   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.762381   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.949499   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.949631   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.949633   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.261328   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:11.449676   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.449721   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.449752   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.762159   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:11.948286   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.948321   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.948359   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.050135   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:12.261206   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.449407   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.449590   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.449718   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:12.674879   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:12.674920   10795 retry.go:31] will retry after 26.963689157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:12.762091   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.948699   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.948717   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.948889   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.262556   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.448571   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.448590   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.448784   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.761479   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.949262   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.949294   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.949378   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.262561   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.449290   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.449313   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.449291   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.762121   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.948612   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.948893   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.949035   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.261868   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.449169   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.449193   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:15.449304   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.762740   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.949087   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.949174   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.949268   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.261726   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.448467   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.448527   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.448553   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.878822   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.979604   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.979746   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.979797   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.261105   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.448004   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.448082   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.448294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.762143   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.948042   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.948125   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.948280   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.261292   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.448221   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.448314   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.448340   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.761754   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.948612   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.948796   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.948795   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.261444   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.448554   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.448677   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.448693   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.761594   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.948529   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.948676   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.948819   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.261714   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:20.448373   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.448516   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.448756   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.761188   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:20.948294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.948317   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.948376   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.261815   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.449212   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.449266   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.449492   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:21.762329   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.948349   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.948392   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.948449   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.261966   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.448967   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.449082   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.449156   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.761984   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.947969   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.948020   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.948124   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.261759   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.448918   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.448954   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.448964   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.762051   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.949294   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.949339   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.950255   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.262015   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.449284   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.449413   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.449423   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.761745   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.948833   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.948929   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.948954   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.261773   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.449087   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.449116   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.449124   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.761959   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.949284   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.949307   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.949474   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.261839   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.451837   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.452029   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.452149   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.762083   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.947674   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.948025   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.948046   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.263328   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:27.483566   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.484602   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.485898   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.761707   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:27.953395   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.953660   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.953810   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.262122   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.447929   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.447963   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.448220   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.763514   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.949340   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.949380   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.949548   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.261633   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.448803   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.448807   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.448882   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.762030   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.947964   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.947975   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.948016   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.262251   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.449245   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.449298   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.449304   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.763508   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.949188   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.949384   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.949401   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.263700   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.449152   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.449209   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.449403   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.762000   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.949091   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.949148   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.949326   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.262793   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.450023   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:32.450894   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.451683   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.762599   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.948989   10795 kapi.go:107] duration metric: took 1m13.003787796s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 08:31:32.949046   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.949071   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.262111   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.448206   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.448233   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.762503   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.948461   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.948597   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.261453   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.448617   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.448973   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.761506   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.948777   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.948854   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.261442   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.448748   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.448801   10795 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:35.762312   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.949104   10795 kapi.go:107] duration metric: took 1m16.004049712s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 08:31:35.949132   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.262203   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.448437   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.761222   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.948054   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.265031   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.450970   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.763730   10795 kapi.go:107] duration metric: took 1m11.505271253s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 08:31:37.765819   10795 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-475995 cluster.
	I1025 08:31:37.767230   10795 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 08:31:37.768458   10795 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
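The `gcp-auth-skip-secret` opt-out mentioned in the three lines above is applied as an ordinary pod label. A minimal sketch, assuming a throwaway pod name and image (both hypothetical; only the label key comes from the log, and the value `true` follows common usage):

    kubectl run no-gcp-creds --image=busybox:1.36 \
      --labels=gcp-auth-skip-secret=true -- sleep 3600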
	I1025 08:31:37.949188   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.448661   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.949064   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.448331   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.639451   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:39.949990   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:40.348837   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:40.348872   10795 retry.go:31] will retry after 25.783943494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
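The failure above is kubectl's client-side schema validation: every Kubernetes manifest must declare both `apiVersion` and `kind`, and at least one document in ig-crd.yaml declares neither. A minimal sketch of a manifest that passes that check (all names hypothetical; only the two flagged fields relate to the error):

    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: apiextensions.k8s.io/v1   # required; reported absent in the failing file
    kind: CustomResourceDefinition        # required; reported absent in the failing file
    metadata:
      name: examples.demo.example.com
    spec:
      group: demo.example.com
      names:
        plural: examples
        kind: Example
      scope: Namespaced
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
    EOF

The `--validate=false` escape hatch named in the error would mask the problem rather than fix it, so the retry loop that follows is the expected behavior.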
	I1025 08:31:40.449331   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.948661   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.449415   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.948867   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.449514   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.948933   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.449354   10795 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.948900   10795 kapi.go:107] duration metric: took 1m24.003842816s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 08:32:06.135724   10795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 08:32:06.677419   10795 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 08:32:06.677528   10795 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1025 08:32:06.679630   10795 out.go:179] * Enabled addons: ingress-dns, storage-provisioner, registry-creds, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 08:32:06.680924   10795 addons.go:514] duration metric: took 1m48.724096193s for enable addons: enabled=[ingress-dns storage-provisioner registry-creds cloud-spanner amd-gpu-device-plugin nvidia-device-plugin default-storageclass metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 08:32:06.680967   10795 start.go:246] waiting for cluster config update ...
	I1025 08:32:06.680992   10795 start.go:255] writing updated cluster config ...
	I1025 08:32:06.681235   10795 ssh_runner.go:195] Run: rm -f paused
	I1025 08:32:06.685453   10795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:06.689073   10795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nfrz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.693338   10795 pod_ready.go:94] pod "coredns-66bc5c9577-8nfrz" is "Ready"
	I1025 08:32:06.693368   10795 pod_ready.go:86] duration metric: took 4.274014ms for pod "coredns-66bc5c9577-8nfrz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.695276   10795 pod_ready.go:83] waiting for pod "etcd-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.699042   10795 pod_ready.go:94] pod "etcd-addons-475995" is "Ready"
	I1025 08:32:06.699066   10795 pod_ready.go:86] duration metric: took 3.767509ms for pod "etcd-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.700842   10795 pod_ready.go:83] waiting for pod "kube-apiserver-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.704447   10795 pod_ready.go:94] pod "kube-apiserver-addons-475995" is "Ready"
	I1025 08:32:06.704472   10795 pod_ready.go:86] duration metric: took 3.609483ms for pod "kube-apiserver-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:06.706211   10795 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.088997   10795 pod_ready.go:94] pod "kube-controller-manager-addons-475995" is "Ready"
	I1025 08:32:07.089029   10795 pod_ready.go:86] duration metric: took 382.799332ms for pod "kube-controller-manager-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.289270   10795 pod_ready.go:83] waiting for pod "kube-proxy-4qm6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.688963   10795 pod_ready.go:94] pod "kube-proxy-4qm6g" is "Ready"
	I1025 08:32:07.688991   10795 pod_ready.go:86] duration metric: took 399.694794ms for pod "kube-proxy-4qm6g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:07.889449   10795 pod_ready.go:83] waiting for pod "kube-scheduler-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:08.289422   10795 pod_ready.go:94] pod "kube-scheduler-addons-475995" is "Ready"
	I1025 08:32:08.289482   10795 pod_ready.go:86] duration metric: took 399.935083ms for pod "kube-scheduler-addons-475995" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:08.289493   10795 pod_ready.go:40] duration metric: took 1.604013016s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:08.333893   10795 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 08:32:08.335829   10795 out.go:179] * Done! kubectl is now configured to use "addons-475995" cluster and "default" namespace by default
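The pod_ready block above polls six label selectors in kube-system until each matching pod reports Ready. Roughly the same wait, expressed with stock kubectl (selectors copied from the log; `kubectl wait` has no "or be gone" clause, so this is only an approximation of the minikube logic):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy \
               component=kube-scheduler; do
      kubectl wait -n kube-system --for=condition=Ready pod -l "$sel" --timeout=4m
    done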
	
	
	==> CRI-O <==
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.214530106Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.897545093Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0ccd6845-16fb-4eb5-aeb8-a4ad991050d1 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.898144832Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bae7dd93-df33-4d99-974a-6ee7a16daedb name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.899437696Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6034ba07-9c08-4989-acc5-896324f612a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.902796703Z" level=info msg="Creating container: default/busybox/busybox" id=7bb4dc0f-ffa9-453a-9a08-babe8ca4c9e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.902900942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.909512465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.910546141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.941585413Z" level=info msg="Created container 5260d1e3f01aba177c72727d9e27d007d4ff0faac0043935b1db1ba7de646ec8: default/busybox/busybox" id=7bb4dc0f-ffa9-453a-9a08-babe8ca4c9e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.94224706Z" level=info msg="Starting container: 5260d1e3f01aba177c72727d9e27d007d4ff0faac0043935b1db1ba7de646ec8" id=0cecb77c-d537-4901-b3b6-9d9acc979bda name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 08:32:09 addons-475995 crio[766]: time="2025-10-25T08:32:09.944373188Z" level=info msg="Started container" PID=6552 containerID=5260d1e3f01aba177c72727d9e27d007d4ff0faac0043935b1db1ba7de646ec8 description=default/busybox/busybox id=0cecb77c-d537-4901-b3b6-9d9acc979bda name=/runtime.v1.RuntimeService/StartContainer sandboxID=86085a08c3178ec952b6607e9e383e5d6741e1d6ed62afdadbc81a70ebfd7952
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.294971143Z" level=info msg="Removing container: dfcff3953d3097cf494dec94ea0f08e081fc711f0e427e1f7cd662e8347ba746" id=a5c35b0a-b502-4570-8165-ded246c53131 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.301611076Z" level=info msg="Removed container dfcff3953d3097cf494dec94ea0f08e081fc711f0e427e1f7cd662e8347ba746: gcp-auth/gcp-auth-certs-create-xx8m2/create" id=a5c35b0a-b502-4570-8165-ded246c53131 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.303314194Z" level=info msg="Removing container: 73ff29340e790e916b7bf3419f1bff8f2164340344654f3422ea89642b6e4133" id=5232703f-251f-472c-acde-c4b18a880f8c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.309825528Z" level=info msg="Removed container 73ff29340e790e916b7bf3419f1bff8f2164340344654f3422ea89642b6e4133: gcp-auth/gcp-auth-certs-patch-hqngk/patch" id=5232703f-251f-472c-acde-c4b18a880f8c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.312360339Z" level=info msg="Stopping pod sandbox: add28938dfd9166c9532acad446bdb79d773e6c4b00852bc2b6d2970b80679e5" id=7f10eeb7-66ed-4c2c-937d-f11baad85d01 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.312415418Z" level=info msg="Stopped pod sandbox (already stopped): add28938dfd9166c9532acad446bdb79d773e6c4b00852bc2b6d2970b80679e5" id=7f10eeb7-66ed-4c2c-937d-f11baad85d01 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.312837522Z" level=info msg="Removing pod sandbox: add28938dfd9166c9532acad446bdb79d773e6c4b00852bc2b6d2970b80679e5" id=b8dafaf3-d84d-498f-a240-030667c531fa name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.316112958Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.316176925Z" level=info msg="Removed pod sandbox: add28938dfd9166c9532acad446bdb79d773e6c4b00852bc2b6d2970b80679e5" id=b8dafaf3-d84d-498f-a240-030667c531fa name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.316572695Z" level=info msg="Stopping pod sandbox: 4c45ee561d2a881a4e4f291952357754f75b1395b9b51d3911d66c10033a8cab" id=ab62e01e-4140-47f3-8eca-0c6cb4c5000d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.316609401Z" level=info msg="Stopped pod sandbox (already stopped): 4c45ee561d2a881a4e4f291952357754f75b1395b9b51d3911d66c10033a8cab" id=ab62e01e-4140-47f3-8eca-0c6cb4c5000d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.316876356Z" level=info msg="Removing pod sandbox: 4c45ee561d2a881a4e4f291952357754f75b1395b9b51d3911d66c10033a8cab" id=17edfc2c-a962-4521-8814-f09579a9a87b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.319678745Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 08:32:12 addons-475995 crio[766]: time="2025-10-25T08:32:12.319735106Z" level=info msg="Removed pod sandbox: 4c45ee561d2a881a4e4f291952357754f75b1395b9b51d3911d66c10033a8cab" id=17edfc2c-a962-4521-8814-f09579a9a87b name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	5260d1e3f01ab       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   86085a08c3178       busybox                                     default
	bab891b7af1f4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          34 seconds ago       Running             csi-snapshotter                          0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	22f2b9269ef02       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          35 seconds ago       Running             csi-provisioner                          0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	8de87df506db7       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            36 seconds ago       Running             liveness-probe                           0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	101a2932de347       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           37 seconds ago       Running             hostpath                                 0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	7f9bf3508d183       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                38 seconds ago       Running             node-driver-registrar                    0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	32a6ca3d206b0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            38 seconds ago       Running             gadget                                   0                   3d2754cfb52fa       gadget-n5ndm                                gadget
	1e80a58fe2589       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 41 seconds ago       Running             gcp-auth                                 0                   aac61d2cc9f95       gcp-auth-78565c9fb4-lch5j                   gcp-auth
	e83e5239a5fd8       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             42 seconds ago       Running             controller                               0                   5aeaa6cb706c6       ingress-nginx-controller-675c5ddd98-mdshg   ingress-nginx
	b23168cf49c8b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              46 seconds ago       Running             registry-proxy                           0                   60d3d07cc8953       registry-proxy-twv4t                        kube-system
	9ebf337144234       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     47 seconds ago       Running             nvidia-device-plugin-ctr                 0                   8775c1c70888e       nvidia-device-plugin-daemonset-lbh6g        kube-system
	e6efa48ea6a2f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   50 seconds ago       Running             csi-external-health-monitor-controller   0                   f7db7c90708f5       csi-hostpathplugin-kswpf                    kube-system
	2107300ec375f       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        51 seconds ago       Running             metrics-server                           0                   4872c0c4279a6       metrics-server-85b7d694d7-5wn89             kube-system
	74693a35fd3fc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     52 seconds ago       Running             amd-gpu-device-plugin                    0                   d93563f8b8bdf       amd-gpu-device-plugin-6mxn7                 kube-system
	7358a40adba97       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   a082dd61a282b       csi-hostpath-resizer-0                      kube-system
	2f476752a0079       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   67cfbbb3f470f       snapshot-controller-7d9fbc56b8-8qx69        kube-system
	956b214b91f1c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   7b5889d5da2f8       snapshot-controller-7d9fbc56b8-mcjmk        kube-system
	ecf62df96b889       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   f5c59d674a72b       csi-hostpath-attacher-0                     kube-system
	594922a23e3cb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              patch                                    0                   72467b81df621       ingress-nginx-admission-patch-49wjr         ingress-nginx
	e1b1ef989389c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   4929cfc5d6339       ingress-nginx-admission-create-2j77z        ingress-nginx
	f9f537f8ebc4f       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   01eaed274722b       yakd-dashboard-5ff678cb9-2ntvm              yakd-dashboard
	5f2a1a6adc37e       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   66eaadae61133       cloud-spanner-emulator-86bd5cbb97-zlql6     default
	d30403917ed89       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   d099737d45cbb       registry-6b586f9694-pw542                   kube-system
	4f77508bbc9ac       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   5879c159ca06b       local-path-provisioner-648f6765c9-th4g2     local-path-storage
	09848150de892       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   7c520df262094       kube-ingress-dns-minikube                   kube-system
	02939bc11915d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   1f95254a870ce       coredns-66bc5c9577-8nfrz                    kube-system
	76b61de4dd3d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   ca7089c35a6e2       storage-provisioner                         kube-system
	ca5be89b6d548       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   6999cf831fc9b       kube-proxy-4qm6g                            kube-system
	19c714713a8d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   41e06c3365086       kindnet-r5lvv                               kube-system
	b8679170a4379       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   d0ef16387f954       kube-controller-manager-addons-475995       kube-system
	7ca23082c83a4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   7507d6f7d05c6       kube-apiserver-addons-475995                kube-system
	c092ee6bc7618       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   4c47d9eecbe27       kube-scheduler-addons-475995                kube-system
	8f6f29d5a814c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   c4545b16cee56       etcd-addons-475995                          kube-system
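The container-status table above is CRI-level state, so it can be regenerated on the node at any time, including the Exited admission-webhook jobs; a sketch using crictl over minikube ssh (profile name taken from this run):

    minikube -p addons-475995 ssh -- sudo crictl ps -a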
	
	
	==> coredns [02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc] <==
	[INFO] 10.244.0.17:46981 - 45804 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003021041s
	[INFO] 10.244.0.17:45757 - 35958 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000117016s
	[INFO] 10.244.0.17:45757 - 35611 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000117615s
	[INFO] 10.244.0.17:33773 - 25250 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000082971s
	[INFO] 10.244.0.17:33773 - 25008 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000115031s
	[INFO] 10.244.0.17:45513 - 55277 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00006657s
	[INFO] 10.244.0.17:45513 - 54805 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000102954s
	[INFO] 10.244.0.17:41720 - 51847 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122831s
	[INFO] 10.244.0.17:41720 - 51687 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152428s
	[INFO] 10.244.0.21:48768 - 30598 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000240724s
	[INFO] 10.244.0.21:50393 - 41717 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000285157s
	[INFO] 10.244.0.21:44432 - 38134 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109653s
	[INFO] 10.244.0.21:36012 - 50730 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179492s
	[INFO] 10.244.0.21:43626 - 13925 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119045s
	[INFO] 10.244.0.21:60398 - 26994 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143395s
	[INFO] 10.244.0.21:39800 - 30708 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004304357s
	[INFO] 10.244.0.21:38816 - 46239 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005490784s
	[INFO] 10.244.0.21:58176 - 45043 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004776562s
	[INFO] 10.244.0.21:56306 - 31935 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006690668s
	[INFO] 10.244.0.21:42337 - 14247 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005460615s
	[INFO] 10.244.0.21:35163 - 63669 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005798292s
	[INFO] 10.244.0.21:37011 - 48486 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004800581s
	[INFO] 10.244.0.21:59703 - 62947 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005460404s
	[INFO] 10.244.0.21:35596 - 49088 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001748502s
	[INFO] 10.244.0.21:44357 - 52035 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002260823s
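The NXDOMAIN cascade above is ordinary ndots search-path expansion: with Kubernetes' default `options ndots:5`, an unqualified name such as storage.googleapis.com is tried against every search suffix before the bare name finally resolves (the two trailing NOERROR lines). The suffix list can be read off any pod; a sketch, assuming the gcp-auth deployment is a usable exec target:

    kubectl exec -n gcp-auth deploy/gcp-auth -- cat /etc/resolv.conf
    # expected shape, suffixes inferred from the queries logged above:
    #   search gcp-auth.svc.cluster.local svc.cluster.local cluster.local local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    #   options ndots:5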
	
	
	==> describe nodes <==
	Name:               addons-475995
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-475995
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=addons-475995
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_30_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-475995
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-475995"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:30:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-475995
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 08:32:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 08:32:15 +0000   Sat, 25 Oct 2025 08:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 08:32:15 +0000   Sat, 25 Oct 2025 08:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 08:32:15 +0000   Sat, 25 Oct 2025 08:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 08:32:15 +0000   Sat, 25 Oct 2025 08:30:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-475995
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                68791317-d9d7-499d-a824-0c15109dc003
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-86bd5cbb97-zlql6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gadget                      gadget-n5ndm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-lch5j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-mdshg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         118s
	  kube-system                 amd-gpu-device-plugin-6mxn7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 coredns-66bc5c9577-8nfrz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpathplugin-kswpf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 etcd-addons-475995                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-r5lvv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-addons-475995                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-addons-475995        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-4qm6g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-addons-475995                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 metrics-server-85b7d694d7-5wn89              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-lbh6g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 registry-6b586f9694-pw542                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-creds-764b6fb674-rq26r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-proxy-twv4t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 snapshot-controller-7d9fbc56b8-8qx69         0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 snapshot-controller-7d9fbc56b8-mcjmk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-th4g2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2ntvm               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node addons-475995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node addons-475995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node addons-475995 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node addons-475995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node addons-475995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node addons-475995 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m1s                   node-controller  Node addons-475995 event: Registered Node addons-475995 in Controller
	  Normal  NodeReady                79s                    kubelet          Node addons-475995 status is now: NodeReady
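The node report above is plain `kubectl describe node` output and can be reproduced against the live cluster:

    kubectl describe node addons-475995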
	
	
	==> dmesg <==
	[Oct25 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001003] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.092011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.411470] i8042: Warning: Keylock active
	[  +0.015621] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526084] block sda: the capability attribute has been deprecated.
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6] <==
	{"level":"warn","ts":"2025-10-25T08:30:09.253424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.259336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.265104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.271693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.278298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.285062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.291033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.296819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.302829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.308624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.315272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.330980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:09.343677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:20.504366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:20.511552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:46.792912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:46.799432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:46.824181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59090","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T08:31:16.670995Z","caller":"traceutil/trace.go:172","msg":"trace[1851652172] transaction","detail":"{read_only:false; response_revision:1082; number_of_response:1; }","duration":"120.454278ms","start":"2025-10-25T08:31:16.550519Z","end":"2025-10-25T08:31:16.670973Z","steps":["trace[1851652172] 'process raft request'  (duration: 120.324468ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:31:16.853583Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.299369ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:31:16.853690Z","caller":"traceutil/trace.go:172","msg":"trace[786712106] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1082; }","duration":"126.423617ms","start":"2025-10-25T08:31:16.727249Z","end":"2025-10-25T08:31:16.853672Z","steps":["trace[786712106] 'agreement among raft nodes before linearized reading'  (duration: 64.673615ms)","trace[786712106] 'range keys from in-memory index tree'  (duration: 61.576787ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:31:16.853735Z","caller":"traceutil/trace.go:172","msg":"trace[1627505551] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"178.010546ms","start":"2025-10-25T08:31:16.675706Z","end":"2025-10-25T08:31:16.853716Z","steps":["trace[1627505551] 'process raft request'  (duration: 116.226184ms)","trace[1627505551] 'compare'  (duration: 61.595534ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T08:31:16.877398Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.415379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:31:16.877528Z","caller":"traceutil/trace.go:172","msg":"trace[1241973784] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1083; }","duration":"117.554095ms","start":"2025-10-25T08:31:16.759955Z","end":"2025-10-25T08:31:16.877509Z","steps":["trace[1241973784] 'agreement among raft nodes before linearized reading'  (duration: 93.810302ms)","trace[1241973784] 'range keys from in-memory index tree'  (duration: 23.58797ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:31:31.390692Z","caller":"traceutil/trace.go:172","msg":"trace[210320775] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"127.376797ms","start":"2025-10-25T08:31:31.263296Z","end":"2025-10-25T08:31:31.390673Z","steps":["trace[210320775] 'process raft request'  (duration: 62.840888ms)","trace[210320775] 'compare'  (duration: 64.375909ms)"],"step_count":2}
	
	
	==> gcp-auth [1e80a58fe258978872bb179984502f28d7bb245cad29c0add898927058c6beb6] <==
	2025/10/25 08:31:36 GCP Auth Webhook started!
	2025/10/25 08:32:08 Ready to marshal response ...
	2025/10/25 08:32:08 Ready to write response ...
	2025/10/25 08:32:08 Ready to marshal response ...
	2025/10/25 08:32:08 Ready to write response ...
	2025/10/25 08:32:08 Ready to marshal response ...
	2025/10/25 08:32:08 Ready to write response ...
	
	
	==> kernel <==
	 08:32:17 up 14 min,  0 user,  load average: 1.44, 0.80, 0.32
	Linux addons-475995 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44] <==
	I1025 08:30:18.780832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 08:30:18.780937       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 08:30:48.780573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 08:30:48.781610       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 08:30:48.781657       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 08:30:48.830100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 08:30:50.381064       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 08:30:50.381092       1 metrics.go:72] Registering metrics
	I1025 08:30:50.381159       1 controller.go:711] "Syncing nftables rules"
	I1025 08:30:58.787392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:30:58.787550       1 main.go:301] handling current node
	I1025 08:31:08.780523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:08.780572       1 main.go:301] handling current node
	I1025 08:31:18.780142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:18.780208       1 main.go:301] handling current node
	I1025 08:31:28.780893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:28.780935       1 main.go:301] handling current node
	I1025 08:31:38.780688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:38.780726       1 main.go:301] handling current node
	I1025 08:31:48.780633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:48.780744       1 main.go:301] handling current node
	I1025 08:31:58.781842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:58.781882       1 main.go:301] handling current node
	I1025 08:32:08.780064       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:08.780096       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2] <==
	E1025 08:30:58.919752       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.18.217:443: connect: connection refused" logger="UnhandledError"
	W1025 08:30:58.920038       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.18.217:443: connect: connection refused
	E1025 08:30:58.920076       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.18.217:443: connect: connection refused" logger="UnhandledError"
	W1025 08:31:20.123947       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:31:20.124030       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 08:31:20.124040       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:31:20.127168       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:31:20.127217       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 08:31:20.127232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:31:37.589895       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:31:37.589976       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 08:31:37.590390       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.591900       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.597447       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.618280       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	E1025 08:31:37.659821       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.86.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.86.150:443: connect: connection refused" logger="UnhandledError"
	I1025 08:31:37.777418       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 08:32:16.032414       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47676: use of closed network connection
	E1025 08:32:16.178779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47702: use of closed network connection
	
	
	==> kube-controller-manager [b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83] <==
	I1025 08:30:16.776185       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 08:30:16.776277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 08:30:16.776282       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 08:30:16.776175       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 08:30:16.776350       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 08:30:16.776352       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 08:30:16.776365       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 08:30:16.776442       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 08:30:16.777544       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 08:30:16.777591       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 08:30:16.778843       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 08:30:16.783069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:30:16.784309       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 08:30:16.794814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 08:30:19.206123       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 08:30:46.787157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:30:46.787312       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 08:30:46.787377       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 08:30:46.804375       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 08:30:46.810905       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 08:30:46.888258       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:30:46.911682       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:31:01.740919       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 08:31:16.892614       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:31:16.919463       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25] <==
	I1025 08:30:18.308803       1 server_linux.go:53] "Using iptables proxy"
	I1025 08:30:18.413944       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:30:18.514107       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:30:18.515465       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 08:30:18.515708       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:30:18.777195       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 08:30:18.777317       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:30:18.827533       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:30:18.835085       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:30:18.835144       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:30:18.843993       1 config.go:200] "Starting service config controller"
	I1025 08:30:18.849172       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:30:18.845780       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:30:18.849309       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:30:18.845796       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:30:18.849361       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:30:18.845336       1 config.go:309] "Starting node config controller"
	I1025 08:30:18.849405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:30:18.849437       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:30:18.949725       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 08:30:18.950316       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 08:30:18.950335       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e] <==
	E1025 08:30:09.796900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:30:09.797274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:30:09.797545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:30:09.797613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:30:09.797665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:09.797719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:30:09.797761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 08:30:09.797771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:30:09.797801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:09.797819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:30:09.797852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 08:30:09.797852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:30:09.797904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:09.797913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:09.798009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:09.798047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:10.692894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:10.752188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:10.905013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:10.983073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:10.998270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:11.003214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 08:30:11.008110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:11.075576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 08:30:12.895206       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 08:31:26 addons-475995 kubelet[1307]: I1025 08:31:26.552287    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6mxn7" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:31:26 addons-475995 kubelet[1307]: I1025 08:31:26.561252    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/metrics-server-85b7d694d7-5wn89" podStartSLOduration=40.524303082 podStartE2EDuration="1m7.561230839s" podCreationTimestamp="2025-10-25 08:30:19 +0000 UTC" firstStartedPulling="2025-10-25 08:30:59.346070483 +0000 UTC m=+47.118670814" lastFinishedPulling="2025-10-25 08:31:26.382998228 +0000 UTC m=+74.155598571" observedRunningTime="2025-10-25 08:31:26.560439754 +0000 UTC m=+74.333040098" watchObservedRunningTime="2025-10-25 08:31:26.561230839 +0000 UTC m=+74.333831185"
	Oct 25 08:31:30 addons-475995 kubelet[1307]: I1025 08:31:30.570065    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-lbh6g" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:31:30 addons-475995 kubelet[1307]: I1025 08:31:30.582080    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-lbh6g" podStartSLOduration=1.951616746 podStartE2EDuration="32.582062874s" podCreationTimestamp="2025-10-25 08:30:58 +0000 UTC" firstStartedPulling="2025-10-25 08:30:59.351109021 +0000 UTC m=+47.123709346" lastFinishedPulling="2025-10-25 08:31:29.981555149 +0000 UTC m=+77.754155474" observedRunningTime="2025-10-25 08:31:30.581303592 +0000 UTC m=+78.353903935" watchObservedRunningTime="2025-10-25 08:31:30.582062874 +0000 UTC m=+78.354663217"
	Oct 25 08:31:30 addons-475995 kubelet[1307]: E1025 08:31:30.834936    1307 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 25 08:31:30 addons-475995 kubelet[1307]: E1025 08:31:30.835032    1307 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2efaa5a3-60c5-4bdf-95a9-a203d74287d0-gcr-creds podName:2efaa5a3-60c5-4bdf-95a9-a203d74287d0 nodeName:}" failed. No retries permitted until 2025-10-25 08:32:02.835012101 +0000 UTC m=+110.607612445 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/2efaa5a3-60c5-4bdf-95a9-a203d74287d0-gcr-creds") pod "registry-creds-764b6fb674-rq26r" (UID: "2efaa5a3-60c5-4bdf-95a9-a203d74287d0") : secret "registry-creds-gcr" not found
	Oct 25 08:31:31 addons-475995 kubelet[1307]: I1025 08:31:31.573158    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-lbh6g" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:31:32 addons-475995 kubelet[1307]: I1025 08:31:32.580617    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-twv4t" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:31:32 addons-475995 kubelet[1307]: I1025 08:31:32.594172    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-twv4t" podStartSLOduration=2.254622241 podStartE2EDuration="34.594153362s" podCreationTimestamp="2025-10-25 08:30:58 +0000 UTC" firstStartedPulling="2025-10-25 08:30:59.368059887 +0000 UTC m=+47.140660215" lastFinishedPulling="2025-10-25 08:31:31.707591014 +0000 UTC m=+79.480191336" observedRunningTime="2025-10-25 08:31:32.593217954 +0000 UTC m=+80.365818297" watchObservedRunningTime="2025-10-25 08:31:32.594153362 +0000 UTC m=+80.366753706"
	Oct 25 08:31:33 addons-475995 kubelet[1307]: I1025 08:31:33.583872    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-twv4t" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:31:35 addons-475995 kubelet[1307]: I1025 08:31:35.603911    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-mdshg" podStartSLOduration=56.331782774 podStartE2EDuration="1m16.603888069s" podCreationTimestamp="2025-10-25 08:30:19 +0000 UTC" firstStartedPulling="2025-10-25 08:31:15.096408426 +0000 UTC m=+62.869008748" lastFinishedPulling="2025-10-25 08:31:35.368513708 +0000 UTC m=+83.141114043" observedRunningTime="2025-10-25 08:31:35.603232715 +0000 UTC m=+83.375833070" watchObservedRunningTime="2025-10-25 08:31:35.603888069 +0000 UTC m=+83.376488412"
	Oct 25 08:31:37 addons-475995 kubelet[1307]: I1025 08:31:37.624702    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-lch5j" podStartSLOduration=50.181302344 podStartE2EDuration="1m11.624680967s" podCreationTimestamp="2025-10-25 08:30:26 +0000 UTC" firstStartedPulling="2025-10-25 08:31:15.126093989 +0000 UTC m=+62.898694311" lastFinishedPulling="2025-10-25 08:31:36.569472599 +0000 UTC m=+84.342072934" observedRunningTime="2025-10-25 08:31:37.623550036 +0000 UTC m=+85.396150380" watchObservedRunningTime="2025-10-25 08:31:37.624680967 +0000 UTC m=+85.397281310"
	Oct 25 08:31:39 addons-475995 kubelet[1307]: I1025 08:31:39.626059    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-n5ndm" podStartSLOduration=65.375377345 podStartE2EDuration="1m20.626039374s" podCreationTimestamp="2025-10-25 08:30:19 +0000 UTC" firstStartedPulling="2025-10-25 08:31:23.625514633 +0000 UTC m=+71.398114955" lastFinishedPulling="2025-10-25 08:31:38.876176659 +0000 UTC m=+86.648776984" observedRunningTime="2025-10-25 08:31:39.624818687 +0000 UTC m=+87.397419065" watchObservedRunningTime="2025-10-25 08:31:39.626039374 +0000 UTC m=+87.398639717"
	Oct 25 08:31:41 addons-475995 kubelet[1307]: I1025 08:31:41.368337    1307 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 25 08:31:41 addons-475995 kubelet[1307]: I1025 08:31:41.368383    1307 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 25 08:31:43 addons-475995 kubelet[1307]: I1025 08:31:43.656106    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-kswpf" podStartSLOduration=1.903808223 podStartE2EDuration="45.65608292s" podCreationTimestamp="2025-10-25 08:30:58 +0000 UTC" firstStartedPulling="2025-10-25 08:30:59.349875403 +0000 UTC m=+47.122475729" lastFinishedPulling="2025-10-25 08:31:43.102150098 +0000 UTC m=+90.874750426" observedRunningTime="2025-10-25 08:31:43.655698467 +0000 UTC m=+91.428298810" watchObservedRunningTime="2025-10-25 08:31:43.65608292 +0000 UTC m=+91.428683263"
	Oct 25 08:31:46 addons-475995 kubelet[1307]: I1025 08:31:46.306307    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ee21a6-a0d7-4c8e-8f4f-3309c5d267a6" path="/var/lib/kubelet/pods/73ee21a6-a0d7-4c8e-8f4f-3309c5d267a6/volumes"
	Oct 25 08:31:56 addons-475995 kubelet[1307]: I1025 08:31:56.305346    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3628807b-b58a-4992-8dfb-c36566461db4" path="/var/lib/kubelet/pods/3628807b-b58a-4992-8dfb-c36566461db4/volumes"
	Oct 25 08:32:02 addons-475995 kubelet[1307]: E1025 08:32:02.890253    1307 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 25 08:32:02 addons-475995 kubelet[1307]: E1025 08:32:02.890367    1307 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2efaa5a3-60c5-4bdf-95a9-a203d74287d0-gcr-creds podName:2efaa5a3-60c5-4bdf-95a9-a203d74287d0 nodeName:}" failed. No retries permitted until 2025-10-25 08:33:06.89034347 +0000 UTC m=+174.662943812 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/2efaa5a3-60c5-4bdf-95a9-a203d74287d0-gcr-creds") pod "registry-creds-764b6fb674-rq26r" (UID: "2efaa5a3-60c5-4bdf-95a9-a203d74287d0") : secret "registry-creds-gcr" not found
	Oct 25 08:32:09 addons-475995 kubelet[1307]: I1025 08:32:09.036535    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bd7ad6d8-21ea-4f20-9bbd-79df26ebdc4d-gcp-creds\") pod \"busybox\" (UID: \"bd7ad6d8-21ea-4f20-9bbd-79df26ebdc4d\") " pod="default/busybox"
	Oct 25 08:32:09 addons-475995 kubelet[1307]: I1025 08:32:09.036596    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9jn\" (UniqueName: \"kubernetes.io/projected/bd7ad6d8-21ea-4f20-9bbd-79df26ebdc4d-kube-api-access-cq9jn\") pod \"busybox\" (UID: \"bd7ad6d8-21ea-4f20-9bbd-79df26ebdc4d\") " pod="default/busybox"
	Oct 25 08:32:10 addons-475995 kubelet[1307]: I1025 08:32:10.748534    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.062341091 podStartE2EDuration="2.748512625s" podCreationTimestamp="2025-10-25 08:32:08 +0000 UTC" firstStartedPulling="2025-10-25 08:32:09.212705048 +0000 UTC m=+116.985305370" lastFinishedPulling="2025-10-25 08:32:09.89887657 +0000 UTC m=+117.671476904" observedRunningTime="2025-10-25 08:32:10.747516024 +0000 UTC m=+118.520116367" watchObservedRunningTime="2025-10-25 08:32:10.748512625 +0000 UTC m=+118.521112967"
	Oct 25 08:32:12 addons-475995 kubelet[1307]: I1025 08:32:12.293444    1307 scope.go:117] "RemoveContainer" containerID="dfcff3953d3097cf494dec94ea0f08e081fc711f0e427e1f7cd662e8347ba746"
	Oct 25 08:32:12 addons-475995 kubelet[1307]: I1025 08:32:12.302046    1307 scope.go:117] "RemoveContainer" containerID="73ff29340e790e916b7bf3419f1bff8f2164340344654f3422ea89642b6e4133"
	
	
	==> storage-provisioner [76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f] <==
	W1025 08:31:53.716560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:31:55.719328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:31:55.723177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:31:57.726246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:31:57.731452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:31:59.734521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:31:59.738231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:01.740928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:01.744841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:03.748101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:03.752614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:05.755771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:05.760962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:07.763603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:07.767579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:09.771845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:09.777668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:11.780399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:11.784068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:13.786895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:13.793476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:15.796505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:15.800999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:17.803770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:32:17.808815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-475995 -n addons-475995
helpers_test.go:269: (dbg) Run:  kubectl --context addons-475995 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr registry-creds-764b6fb674-rq26r
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-475995 describe pod ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr registry-creds-764b6fb674-rq26r
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-475995 describe pod ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr registry-creds-764b6fb674-rq26r: exit status 1 (59.955983ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2j77z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-49wjr" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rq26r" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-475995 describe pod ingress-nginx-admission-create-2j77z ingress-nginx-admission-patch-49wjr registry-creds-764b6fb674-rq26r: exit status 1
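(Note: the three pods listed as non-running at the start of this post-mortem had evidently been deleted in the interim, hence the NotFound errors above; the admission create/patch jobs appear to be cleaned up once they complete, and the registry-creds pod goes away with its addon. A race-free variant of the same post-mortem, listing and describing in a single pass, might look like the sketch below; the context name is taken from this log, but the loop itself is illustrative and is not what helpers_test.go runs:

	kubectl --context addons-475995 get po -A --field-selector=status.phase!=Running \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers \
	  | while read ns name; do
	      # describe each survivor in its own namespace; pods deleted in the
	      # meantime simply drop out instead of failing the whole command
	      kubectl --context addons-475995 describe po -n "$ns" "$name" || true
	    done
)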
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable headlamp --alsologtostderr -v=1: exit status 11 (242.968297ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:32:18.834400   20202 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:18.834552   20202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:18.834562   20202 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:18.834566   20202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:18.834779   20202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:18.835019   20202 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:18.835335   20202 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:18.835349   20202 addons.go:606] checking whether the cluster is paused
	I1025 08:32:18.835425   20202 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:18.835440   20202 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:18.835816   20202 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:18.852814   20202 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:18.852868   20202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:18.870633   20202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:18.969565   20202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:18.969700   20202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:18.998424   20202 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:18.998457   20202 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:18.998461   20202 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:18.998465   20202 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:18.998468   20202 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:18.998472   20202 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:18.998474   20202 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:18.998477   20202 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:18.998479   20202 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:18.998489   20202 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:18.998492   20202 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:18.998495   20202 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:18.998498   20202 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:18.998500   20202 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:18.998503   20202 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:18.998514   20202 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:18.998518   20202 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:18.998523   20202 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:18.998525   20202 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:18.998527   20202 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:18.998532   20202 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:18.998534   20202 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:18.998536   20202 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:18.998539   20202 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:18.998541   20202 cri.go:89] found id: ""
	I1025 08:32:18.998583   20202 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:19.012991   20202 out.go:203] 
	W1025 08:32:19.014447   20202 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:19.014467   20202 out.go:285] * 
	* 
	W1025 08:32:19.017422   20202 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:19.018751   20202 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.59s)
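Note: this failure and every other MK_ADDON_DISABLE_PAUSED exit in this run share a single root cause. Before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node, and on this crio image /run/runc does not exist, so each "addons disable" invocation aborts with exit status 11 before it ever touches the addon. A minimal manual reproduction, assuming SSH access to the node: the first two commands are copied from the log above, while the third is only a suggested way to see which low-level runtime crio is configured with, not something the harness runs:

	minikube -p addons-475995 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-475995 ssh -- sudo runc list -f json    # fails: open /run/runc: no such file or directory
	minikube -p addons-475995 ssh -- sudo crictl info | grep -i runtime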

                                                
                                    
TestAddons/parallel/CloudSpanner (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-zlql6" [831063ee-04a9-4748-94c1-d9014faead61] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003341576s
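(For reference, the "waiting 6m0s for pods matching app=cloud-spanner-emulator" step above is roughly equivalent to the kubectl invocation below; this is a sketch of the readiness check, not the harness's actual implementation:

	kubectl --context addons-475995 -n default wait pod \
	  -l app=cloud-spanner-emulator --for=condition=Ready --timeout=6m
)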
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (276.931616ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:32:26.766180   20806 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:26.766446   20806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:26.766453   20806 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:26.766458   20806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:26.766678   20806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:26.766908   20806 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:26.767258   20806 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:26.767280   20806 addons.go:606] checking whether the cluster is paused
	I1025 08:32:26.767425   20806 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:26.767446   20806 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:26.767993   20806 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:26.789920   20806 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:26.789989   20806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:26.811073   20806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:26.915145   20806 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:26.915237   20806 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:26.948692   20806 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:26.948714   20806 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:26.948721   20806 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:26.948725   20806 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:26.948730   20806 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:26.948734   20806 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:26.948737   20806 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:26.948739   20806 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:26.948742   20806 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:26.948747   20806 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:26.948751   20806 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:26.948755   20806 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:26.948760   20806 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:26.948775   20806 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:26.948783   20806 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:26.948790   20806 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:26.948797   20806 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:26.948802   20806 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:26.948805   20806 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:26.948808   20806 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:26.948810   20806 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:26.948819   20806 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:26.948824   20806 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:26.948827   20806 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:26.948829   20806 cri.go:89] found id: ""
	I1025 08:32:26.948866   20806 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:26.964789   20806 out.go:203] 
	W1025 08:32:26.966210   20806 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:26.966229   20806 out.go:285] * 
	W1025 08:32:26.969247   20806 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:26.970999   20806 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.29s)
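When "runc list -f json" does succeed, it prints a JSON array of container state objects. A small decoder sketch for the two fields the paused check cares about; the id and status tags follow runc's state JSON, but treat the struct as an assumption rather than a pinned schema.

// Hedged sketch: decode "runc list -f json" output and report paused
// containers. Field tags (id, status) are assumptions based on runc's
// state JSON, not verified against this report.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("list failed:", err) // the path every test above hits
		return
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, s := range states {
		if s.Status == "paused" {
			fmt.Println("paused container:", s.ID)
		}
	}
}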
x
+
TestAddons/parallel/LocalPath (8.18s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-475995 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-475995 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-475995 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [34b620c9-5196-4692-9142-8bc90a8f3087] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [34b620c9-5196-4692-9142-8bc90a8f3087] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [34b620c9-5196-4692-9142-8bc90a8f3087] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003087041s
addons_test.go:967: (dbg) Run:  kubectl --context addons-475995 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 ssh "cat /opt/local-path-provisioner/pvc-a0dc3c3f-7548-4bee-b78c-92c7ac072de7_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-475995 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-475995 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (264.921033ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 08:32:27.001091   20941 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:27.001389   20941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:27.001400   20941 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:27.001405   20941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:27.001629   20941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:27.001966   20941 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:27.002334   20941 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:27.002352   20941 addons.go:606] checking whether the cluster is paused
	I1025 08:32:27.002470   20941 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:27.002497   20941 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:27.003013   20941 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:27.021125   20941 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:27.021179   20941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:27.040177   20941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:27.140659   20941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:27.140739   20941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:27.172954   20941 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:27.172990   20941 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:27.172996   20941 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:27.173001   20941 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:27.173005   20941 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:27.173010   20941 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:27.173015   20941 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:27.173019   20941 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:27.173023   20941 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:27.173034   20941 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:27.173043   20941 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:27.173048   20941 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:27.173052   20941 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:27.173056   20941 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:27.173060   20941 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:27.173074   20941 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:27.173084   20941 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:27.173090   20941 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:27.173094   20941 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:27.173098   20941 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:27.173102   20941 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:27.173106   20941 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:27.173111   20941 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:27.173115   20941 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:27.173119   20941 cri.go:89] found id: ""
	I1025 08:32:27.173191   20941 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:27.189076   20941 out.go:203] 
	W1025 08:32:27.190620   20941 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:27.190660   20941 out.go:285] * 
	W1025 08:32:27.194890   20941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:27.196347   20941 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.18s)
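The repeated "kubectl get pvc test-pvc -o jsonpath={.status.phase}" lines above are a poll loop waiting for the claim to reach Bound. A standalone sketch of that loop, with the context and claim names taken from the log; the timeout and poll interval are assumptions.

// Hedged sketch: poll a PVC's status.phase until it is Bound, roughly
// what the helpers_test.go:402 checks do. Timeout and interval are
// assumptions, not values from this report.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-475995",
			"get", "pvc", "test-pvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc test-pvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc test-pvc")
}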
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.26s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-lbh6g" [33628f67-484d-40f4-8741-3818c92aae77] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004037385s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (252.846164ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 08:32:21.495055   20390 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:21.495340   20390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:21.495351   20390 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:21.495357   20390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:21.495586   20390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:21.495882   20390 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:21.496230   20390 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:21.496247   20390 addons.go:606] checking whether the cluster is paused
	I1025 08:32:21.496375   20390 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:21.496398   20390 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:21.496796   20390 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:21.515196   20390 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:21.515238   20390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:21.534058   20390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:21.633425   20390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:21.633526   20390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:21.663890   20390 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:21.663916   20390 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:21.663922   20390 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:21.663926   20390 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:21.663930   20390 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:21.663934   20390 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:21.663939   20390 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:21.663944   20390 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:21.663949   20390 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:21.663957   20390 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:21.663962   20390 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:21.663966   20390 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:21.663975   20390 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:21.663980   20390 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:21.663988   20390 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:21.664000   20390 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:21.664008   20390 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:21.664015   20390 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:21.664019   20390 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:21.664023   20390 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:21.664027   20390 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:21.664031   20390 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:21.664035   20390 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:21.664039   20390 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:21.664043   20390 cri.go:89] found id: ""
	I1025 08:32:21.664085   20390 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:21.679475   20390 out.go:203] 
	W1025 08:32:21.680874   20390 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:21.680900   20390 out.go:285] * 
	W1025 08:32:21.684462   20390 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:21.685661   20390 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)
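Each cri.go:89 "found id:" line above is one ID returned by the preceding crictl call. A sketch of that listing step, run directly on the node rather than through the SSH runner shown in the log, which is a simplification.

// Hedged sketch: list kube-system container IDs with the same crictl
// flags the log shows (ps -a --quiet --label ...), then print each ID.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id) // same shape as the cri.go:89 lines
	}
}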
x
+
TestAddons/parallel/Yakd (5.28s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2ntvm" [86ecc3d9-dbae-4bc9-8863-d89b1c2aac91] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002690426s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable yakd --alsologtostderr -v=1: exit status 11 (274.604815ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 08:32:26.765309   20807 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:26.765631   20807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:26.765650   20807 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:26.765655   20807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:26.765843   20807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:26.766108   20807 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:26.766486   20807 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:26.766501   20807 addons.go:606] checking whether the cluster is paused
	I1025 08:32:26.766585   20807 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:26.766602   20807 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:26.766964   20807 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:26.790106   20807 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:26.790162   20807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:26.809546   20807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:26.913875   20807 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:26.913964   20807 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:26.949825   20807 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:26.949860   20807 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:26.949866   20807 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:26.949871   20807 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:26.949875   20807 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:26.949880   20807 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:26.949884   20807 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:26.949890   20807 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:26.949894   20807 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:26.949906   20807 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:26.949915   20807 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:26.949919   20807 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:26.949923   20807 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:26.949927   20807 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:26.949939   20807 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:26.949949   20807 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:26.949953   20807 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:26.949959   20807 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:26.949963   20807 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:26.949966   20807 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:26.949970   20807 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:26.949973   20807 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:26.949977   20807 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:26.949981   20807 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:26.949984   20807 cri.go:89] found id: ""
	I1025 08:32:26.950050   20807 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:26.963917   20807 out.go:203] 
	W1025 08:32:26.965571   20807 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:26.965597   20807 out.go:285] * 
	W1025 08:32:26.968716   20807 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:26.970290   20807 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.28s)
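The sshutil.go:53 step above discovers the node's SSH endpoint by templating docker container inspect over the published 22/tcp binding. A standalone sketch of that lookup; the template string is copied verbatim from the cli_runner lines.

// Hedged sketch: recover the host port mapped to the container's 22/tcp
// using the exact Go template from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"addons-475995").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 32768 in this run
}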
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-6mxn7" [264ef157-233d-407d-84d5-8a48574edca7] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004001715s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-475995 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-475995 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (260.276056ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 08:32:21.500991   20391 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:21.501145   20391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:21.501155   20391 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:21.501159   20391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:21.501374   20391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:32:21.501625   20391 mustload.go:65] Loading cluster: addons-475995
	I1025 08:32:21.501963   20391 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:21.501976   20391 addons.go:606] checking whether the cluster is paused
	I1025 08:32:21.502049   20391 config.go:182] Loaded profile config "addons-475995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:21.502063   20391 host.go:66] Checking if "addons-475995" exists ...
	I1025 08:32:21.502435   20391 cli_runner.go:164] Run: docker container inspect addons-475995 --format={{.State.Status}}
	I1025 08:32:21.520597   20391 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:21.520675   20391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-475995
	I1025 08:32:21.539920   20391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/addons-475995/id_rsa Username:docker}
	I1025 08:32:21.638516   20391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:21.638607   20391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:21.670384   20391 cri.go:89] found id: "bab891b7af1f44dfa96d5374a8dfbbccb1a81d9f6b7d10c3682110b27f9aa980"
	I1025 08:32:21.670411   20391 cri.go:89] found id: "22f2b9269ef0296b625e3f5ee6b9f74da646ad0ba1904a116486ff0f6e778417"
	I1025 08:32:21.670415   20391 cri.go:89] found id: "8de87df506db79d60005e503a3465ac71beff3cc63c60d3e26696196422e4887"
	I1025 08:32:21.670419   20391 cri.go:89] found id: "101a2932de347b467fd124912a2cd48590c36b71b2d7cc537ed7a5a489707155"
	I1025 08:32:21.670422   20391 cri.go:89] found id: "7f9bf3508d18310cfb92d30b86404e2c85364f876c797b5ea6cc70583786ea07"
	I1025 08:32:21.670424   20391 cri.go:89] found id: "b23168cf49c8b135c4b0855383c2149d3315f2bfd664bb902eebbc8ab166d649"
	I1025 08:32:21.670427   20391 cri.go:89] found id: "9ebf3371442349467aab01758dbee5af097c433d321f48f345aa2bb16763e715"
	I1025 08:32:21.670429   20391 cri.go:89] found id: "e6efa48ea6a2fdf016daf9e374461f27ae7aa582f99f50c77a05a3e9b66ec29b"
	I1025 08:32:21.670432   20391 cri.go:89] found id: "2107300ec375f2596d6d5d8c19582149d3c48b7bc25aa0fed4f9abee0549d6b5"
	I1025 08:32:21.670476   20391 cri.go:89] found id: "74693a35fd3fca78a3c52945bedd68fa22f31bf1facd96ce1cdeefbb0907af56"
	I1025 08:32:21.670481   20391 cri.go:89] found id: "7358a40adba975b4e3d508d56d4b78110f94804a3c9dd55252440f202bd5e7da"
	I1025 08:32:21.670485   20391 cri.go:89] found id: "2f476752a0079039e796863b81ecf0e4a4e0545fa2ca0c4bf266c45810c5d1f1"
	I1025 08:32:21.670490   20391 cri.go:89] found id: "956b214b91f1ce8b11ff7a99645d5b25bca4b8db2cb2126eae99b9c4951e0413"
	I1025 08:32:21.670494   20391 cri.go:89] found id: "ecf62df96b889016d4e67084441bd9ef81bcca4c83c681373047220e8aa24cdc"
	I1025 08:32:21.670498   20391 cri.go:89] found id: "d30403917ed891140b8f4f3158092dd4396d6e5eadbcee892ec6d0426fecd9e9"
	I1025 08:32:21.670504   20391 cri.go:89] found id: "09848150de89248d854a4fa7aad410b781ff8ab23b361db68b035282110d4acb"
	I1025 08:32:21.670512   20391 cri.go:89] found id: "02939bc11915d9ab0c7a0a19146e021cb0c5517db90b1519d873ca0ffb2cafdc"
	I1025 08:32:21.670517   20391 cri.go:89] found id: "76b61de4dd3d6a45a62872d8ecf7aa1be7effe1ba62c3b2e8781ea7aedccc29f"
	I1025 08:32:21.670520   20391 cri.go:89] found id: "ca5be89b6d5481fdab2ed512dc4c6666d9d95aff7aa849cfed7f2b69682e9b25"
	I1025 08:32:21.670523   20391 cri.go:89] found id: "19c714713a8d684612f271dc44ef2c686b9725c0ac373f1d2a105cbbcd7cbc44"
	I1025 08:32:21.670536   20391 cri.go:89] found id: "b8679170a4379917442437cbd58b1c059cff39ef642f8bb771185c80adb84d83"
	I1025 08:32:21.670544   20391 cri.go:89] found id: "7ca23082c83a45f4e9e97bf33116d960f8d5e4d1e6fc2cb507163913386f35d2"
	I1025 08:32:21.670549   20391 cri.go:89] found id: "c092ee6bc7618571c02e6e8a7868806fea6e5717dcad66dffe7e94e7c6be722e"
	I1025 08:32:21.670556   20391 cri.go:89] found id: "8f6f29d5a814cc403538a5b13b8cf6bb66ddb597f68a0ff08f227c4283a62ee6"
	I1025 08:32:21.670561   20391 cri.go:89] found id: ""
	I1025 08:32:21.670610   20391 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:21.684615   20391 out.go:203] 
	W1025 08:32:21.685738   20391 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:21.685767   20391 out.go:285] * 
	W1025 08:32:21.688768   20391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:21.690685   20391 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-475995 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)
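The same labelled-pod wait opens each of these addon tests. Outside the Go harness, kubectl wait is the one-shot equivalent; a sketch with the selector copied from the log, where using condition=Ready and a 6m timeout are assumptions about what the harness treats as healthy.

// Hedged sketch: one-shot pod wait via kubectl, approximating the
// helpers_test.go:352 wait. Condition and timeout are assumptions.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-475995",
		"wait", "pod", "-n", "kube-system",
		"-l", "name=amd-gpu-device-plugin",
		"--for=condition=Ready", "--timeout=6m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}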
x
+
TestFunctional/parallel/ServiceCmdConnect (602.89s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-734361 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-734361 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-x4zb8" [e0654921-2ea8-4fb6-8d7a-77a1b9c744d4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-734361 -n functional-734361
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-25 08:48:02.522221591 +0000 UTC m=+1120.994035374
functional_test.go:1645: (dbg) Run:  kubectl --context functional-734361 describe po hello-node-connect-7d85dfc575-x4zb8 -n default
functional_test.go:1645: (dbg) kubectl --context functional-734361 describe po hello-node-connect-7d85dfc575-x4zb8 -n default:
Name:             hello-node-connect-7d85dfc575-x4zb8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-734361/192.168.49.2
Start Time:       Sat, 25 Oct 2025 08:38:02 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lmcg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-8lmcg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-x4zb8 to functional-734361
  Normal   Pulling    7m (x5 over 9m59s)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 9m59s)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m (x5 over 9m59s)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m50s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m50s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-734361 logs hello-node-connect-7d85dfc575-x4zb8 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-734361 logs hello-node-connect-7d85dfc575-x4zb8 -n default: exit status 1 (61.662601ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-x4zb8" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-734361 logs hello-node-connect-7d85dfc575-x4zb8 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
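The kubelet events above pinpoint the root cause: crio's short-name resolution runs in enforcing mode, and the unqualified kicbase/echo-server matches more than one candidate registry, so every pull aborts with an ambiguous-list error. Fully qualifying the reference sidesteps the lookup; a sketch of the adjusted deployment command, where the docker.io prefix and :latest tag are assumptions since the report only shows the short name.

// Hedged sketch: recreate the deployment with a fully qualified image so
// enforcing short-name mode has nothing to resolve. Registry and tag are
// assumptions, not taken from this report.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-734361",
		"create", "deployment", "hello-node-connect",
		"--image", "docker.io/kicbase/echo-server:latest").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("create failed:", err)
	}
}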
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-734361 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-x4zb8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-734361/192.168.49.2
Start Time:       Sat, 25 Oct 2025 08:38:02 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lmcg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-8lmcg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-x4zb8 to functional-734361
  Normal   Pulling    7m (x5 over 9m59s)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 9m59s)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m (x5 over 9m59s)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m50s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m50s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1618: (dbg) Run:  kubectl --context functional-734361 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-734361 logs -l app=hello-node-connect: exit status 1 (61.947992ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-x4zb8" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-734361 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-734361 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.117.187
IPs:                      10.97.117.187
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31381/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
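Note the empty Endpoints field in the service describe above: the NodePort service has no ready pods behind it, which is exactly what an ImagePullBackOff deployment produces. A quick confirmation sketch, with names copied from the log.

// Hedged sketch: confirm the service has no ready endpoint addresses.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-734361",
		"get", "endpoints", "hello-node-connect", "-n", "default",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("get endpoints failed:", err)
		return
	}
	if len(out) == 0 {
		fmt.Println("no ready endpoints, matching the empty Endpoints: field")
		return
	}
	fmt.Println("ready endpoint IPs:", string(out))
}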
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-734361
helpers_test.go:243: (dbg) docker inspect functional-734361:
-- stdout --
	[
	    {
	        "Id": "d1a81fb14aa7e14d53e0e2c9cf4eb9cfdc5b387891607dbfeabbca9c3a38843a",
	        "Created": "2025-10-25T08:36:02.111268398Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:36:02.145024521Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d1a81fb14aa7e14d53e0e2c9cf4eb9cfdc5b387891607dbfeabbca9c3a38843a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1a81fb14aa7e14d53e0e2c9cf4eb9cfdc5b387891607dbfeabbca9c3a38843a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1a81fb14aa7e14d53e0e2c9cf4eb9cfdc5b387891607dbfeabbca9c3a38843a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1a81fb14aa7e14d53e0e2c9cf4eb9cfdc5b387891607dbfeabbca9c3a38843a/d1a81fb14aa7e14d53e0e2c9cf4eb9cfdc5b387891607dbfeabbca9c3a38843a-json.log",
	        "Name": "/functional-734361",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-734361:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-734361",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1a81fb14aa7e14d53e0e2c9cf4eb9cfdc5b387891607dbfeabbca9c3a38843a",
	                "LowerDir": "/var/lib/docker/overlay2/ed5aad1bef15f623b8d706d738da612dd6723fbf8e280ab2632901d4e0d5dd42-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed5aad1bef15f623b8d706d738da612dd6723fbf8e280ab2632901d4e0d5dd42/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed5aad1bef15f623b8d706d738da612dd6723fbf8e280ab2632901d4e0d5dd42/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed5aad1bef15f623b8d706d738da612dd6723fbf8e280ab2632901d4e0d5dd42/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-734361",
	                "Source": "/var/lib/docker/volumes/functional-734361/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-734361",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-734361",
	                "name.minikube.sigs.k8s.io": "functional-734361",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5dfe2e91fd69e7f6bd6e53d2fb6cf2ba0d5599e1e6b21ba4829d707764b35ffc",
	            "SandboxKey": "/var/run/docker/netns/5dfe2e91fd69",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-734361": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:f0:a4:31:c9:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86727ac3b2730af06b4fb45e37a67f204fc4070d70c4eb08e92d48744003a02e",
	                    "EndpointID": "11d21664ec86862046b59a1eb23b18a60b41d3ee59e1fcb179b6789f5fa96113",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-734361",
	                        "d1a81fb14aa7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
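The inspect output confirms the node container is reachable at 192.168.49.2 on the functional-734361 network; combined with NodePort 31381 from the service describe above, the connectivity the test exercises can be reproduced by hand. A hedged sketch, assuming the docker bridge network is directly routable from the host, as is typical on a Linux agent:

	curl -sv --max-time 5 http://192.168.49.2:31381/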
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-734361 -n functional-734361
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-734361 logs -n 25: (1.263414274s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-734361 update-context --alsologtostderr -v=2                                                                                                         │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ update-context │ functional-734361 update-context --alsologtostderr -v=2                                                                                                         │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ update-context │ functional-734361 update-context --alsologtostderr -v=2                                                                                                         │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image ls                                                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image load --daemon kicbase/echo-server:functional-734361 --alsologtostderr                                                                   │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image ls                                                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image load --daemon kicbase/echo-server:functional-734361 --alsologtostderr                                                                   │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image ls                                                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image save kicbase/echo-server:functional-734361 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image rm kicbase/echo-server:functional-734361 --alsologtostderr                                                                              │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image ls                                                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image save --daemon kicbase/echo-server:functional-734361 --alsologtostderr                                                                   │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image ls --format short --alsologtostderr                                                                                                     │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │                     │
	│ image          │ functional-734361 image ls --format yaml --alsologtostderr                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ ssh            │ functional-734361 ssh pgrep buildkitd                                                                                                                           │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │                     │
	│ image          │ functional-734361 image ls --format json --alsologtostderr                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image ls --format table --alsologtostderr                                                                                                     │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image build -t localhost/my-image:functional-734361 testdata/build --alsologtostderr                                                          │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ image          │ functional-734361 image ls                                                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:38 UTC │ 25 Oct 25 08:38 UTC │
	│ service        │ functional-734361 service list                                                                                                                                  │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:47 UTC │ 25 Oct 25 08:47 UTC │
	│ service        │ functional-734361 service list -o json                                                                                                                          │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:47 UTC │ 25 Oct 25 08:47 UTC │
	│ service        │ functional-734361 service --namespace=default --https --url hello-node                                                                                          │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:47 UTC │                     │
	│ service        │ functional-734361 service hello-node --url --format={{.IP}}                                                                                                     │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:47 UTC │                     │
	│ service        │ functional-734361 service hello-node --url                                                                                                                      │ functional-734361 │ jenkins │ v1.37.0 │ 25 Oct 25 08:47 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:37:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:37:44.901308   43363 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:37:44.901416   43363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:37:44.901433   43363 out.go:374] Setting ErrFile to fd 2...
	I1025 08:37:44.901439   43363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:37:44.901733   43363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:37:44.902298   43363 out.go:368] Setting JSON to false
	I1025 08:37:44.903337   43363 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1213,"bootTime":1761380252,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:37:44.903438   43363 start.go:141] virtualization: kvm guest
	I1025 08:37:44.905012   43363 out.go:179] * [functional-734361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:37:44.906309   43363 notify.go:220] Checking for updates...
	I1025 08:37:44.906329   43363 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:37:44.907548   43363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:37:44.908758   43363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:37:44.910100   43363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 08:37:44.911502   43363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:37:44.912771   43363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:37:44.914189   43363 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:37:44.914625   43363 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:37:44.942262   43363 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:37:44.942406   43363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:37:45.012548   43363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 08:37:44.999723002 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:37:45.012732   43363 docker.go:318] overlay module found
	I1025 08:37:45.015465   43363 out.go:179] * Using the docker driver based on existing profile
	I1025 08:37:45.017078   43363 start.go:305] selected driver: docker
	I1025 08:37:45.017102   43363 start.go:925] validating driver "docker" against &{Name:functional-734361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-734361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:37:45.017238   43363 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:37:45.017365   43363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:37:45.092747   43363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 08:37:45.082266138 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:37:45.093411   43363 cni.go:84] Creating CNI manager for ""
	I1025 08:37:45.093466   43363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:37:45.093508   43363 start.go:349] cluster config:
	{Name:functional-734361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-734361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:37:45.095164   43363 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 25 08:38:15 functional-734361 crio[3594]: time="2025-10-25T08:38:15.508526201Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-734361 found" id=32fc39cd-f35f-4f4e-8575-438575b5c032 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:38:18 functional-734361 crio[3594]: time="2025-10-25T08:38:18.464573179Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f19cc54d-a8f9-4d80-930e-44274f3dbb96 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.451403833Z" level=info msg="Stopping pod sandbox: 67100cee9fdd1dd7ccabf27aec49802577dd193e906a5bfe57f268563ea0c695" id=771781bb-9a24-4b1d-a5db-cf0cfffde82f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.45146483Z" level=info msg="Stopped pod sandbox (already stopped): 67100cee9fdd1dd7ccabf27aec49802577dd193e906a5bfe57f268563ea0c695" id=771781bb-9a24-4b1d-a5db-cf0cfffde82f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.451921727Z" level=info msg="Removing pod sandbox: 67100cee9fdd1dd7ccabf27aec49802577dd193e906a5bfe57f268563ea0c695" id=89cd7f5e-be51-416b-9ee5-c16235e409b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.454941579Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.45500171Z" level=info msg="Removed pod sandbox: 67100cee9fdd1dd7ccabf27aec49802577dd193e906a5bfe57f268563ea0c695" id=89cd7f5e-be51-416b-9ee5-c16235e409b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.45542258Z" level=info msg="Stopping pod sandbox: 07e51b16caa87e3b02fd1b09c4c01adeb575fecf2a0934597573ac3e9561589a" id=ec5ae35f-c256-4553-9e10-aefd927035cd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.45546079Z" level=info msg="Stopped pod sandbox (already stopped): 07e51b16caa87e3b02fd1b09c4c01adeb575fecf2a0934597573ac3e9561589a" id=ec5ae35f-c256-4553-9e10-aefd927035cd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.45573208Z" level=info msg="Removing pod sandbox: 07e51b16caa87e3b02fd1b09c4c01adeb575fecf2a0934597573ac3e9561589a" id=35e4ca5a-d3c1-44b5-b329-5a725081c655 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.457868898Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.457931719Z" level=info msg="Removed pod sandbox: 07e51b16caa87e3b02fd1b09c4c01adeb575fecf2a0934597573ac3e9561589a" id=35e4ca5a-d3c1-44b5-b329-5a725081c655 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.458261643Z" level=info msg="Stopping pod sandbox: 34c0c65a5aa7cbdbbeaafbd844a2a72f7dffd0663841b67d8fe1452cf4c1042b" id=f6eb4b70-fbd7-4ce2-88a0-ad51922170ff name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.45830446Z" level=info msg="Stopped pod sandbox (already stopped): 34c0c65a5aa7cbdbbeaafbd844a2a72f7dffd0663841b67d8fe1452cf4c1042b" id=f6eb4b70-fbd7-4ce2-88a0-ad51922170ff name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.458743315Z" level=info msg="Removing pod sandbox: 34c0c65a5aa7cbdbbeaafbd844a2a72f7dffd0663841b67d8fe1452cf4c1042b" id=4d893770-7431-4e86-acf5-ed6e285b2f1c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.460701201Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 08:38:20 functional-734361 crio[3594]: time="2025-10-25T08:38:20.460747694Z" level=info msg="Removed pod sandbox: 34c0c65a5aa7cbdbbeaafbd844a2a72f7dffd0663841b67d8fe1452cf4c1042b" id=4d893770-7431-4e86-acf5-ed6e285b2f1c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:38:27 functional-734361 crio[3594]: time="2025-10-25T08:38:27.464211387Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a8fe86cf-87b5-4776-b84a-5f9ac4966ee8 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:38:43 functional-734361 crio[3594]: time="2025-10-25T08:38:43.463926463Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f6cfc123-842b-4079-b747-e07ec4042b43 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:39:12 functional-734361 crio[3594]: time="2025-10-25T08:39:12.464734139Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1ef204c8-4589-442a-b17b-17de57942c57 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:39:34 functional-734361 crio[3594]: time="2025-10-25T08:39:34.464066158Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c48e1c58-597f-48b8-9644-610106a2d517 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:40:43 functional-734361 crio[3594]: time="2025-10-25T08:40:43.464556018Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=01087a03-3295-4630-8db2-e304f5472ef1 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:41:02 functional-734361 crio[3594]: time="2025-10-25T08:41:02.464609908Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=058fca7c-3d11-43f3-9e3f-8e67958d1736 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:43:29 functional-734361 crio[3594]: time="2025-10-25T08:43:29.464123497Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a8b0cf00-fe30-4091-bf62-960a458a13da name=/runtime.v1.ImageService/PullImage
	Oct 25 08:43:50 functional-734361 crio[3594]: time="2025-10-25T08:43:50.464251209Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b69f01ac-975b-48dd-8883-4c1d8d6bceb3 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	563d42353a21f       docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8                  9 minutes ago       Running             myfrontend                  0                   85c561a9625b4       sp-pod                                       default
	0c11a3561883e       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   6bbf2fdf2e52b       mysql-5bb876957f-t5w77                       default
	d866c34e1099b       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   0ee9cd1b07556       nginx-svc                                    default
	8f69c59dc58e3       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   4d6f5c32f920d       dashboard-metrics-scraper-77bf4d6c4c-z8wdl   kubernetes-dashboard
	74665363d81da       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   7ea486aae2cc2       kubernetes-dashboard-855c9754f9-hsv5r        kubernetes-dashboard
	9fc2f4a6d3157       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   ebfc3e6bbacb9       busybox-mount                                default
	d9436faf906da       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   6c04cb3a7a408       kube-apiserver-functional-734361             kube-system
	11e8180c19ee4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   a11294041370c       kube-controller-manager-functional-734361    kube-system
	e8efcf428f7bd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   504402ef2dc65       kube-scheduler-functional-734361             kube-system
	ef04d7ea5f1ed       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   62a40033153b3       etcd-functional-734361                       kube-system
	0bff7765f8633       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   614228042f807       kube-proxy-b56jq                             kube-system
	0917c9863eb7a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   0c4f5d277a58b       kindnet-hf77v                                kube-system
	c993214762674       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   589d1c1061057       coredns-66bc5c9577-kd9z6                     kube-system
	80336c9a286db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   b0c4fae8db562       storage-provisioner                          kube-system
	11159beb4990a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   589d1c1061057       coredns-66bc5c9577-kd9z6                     kube-system
	71d256fbaa0f9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   b0c4fae8db562       storage-provisioner                          kube-system
	41ae60d9ff41b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   0c4f5d277a58b       kindnet-hf77v                                kube-system
	2d2a4992b42ae       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   614228042f807       kube-proxy-b56jq                             kube-system
	1b1050d405021       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   62a40033153b3       etcd-functional-734361                       kube-system
	fc80ac041d864       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   504402ef2dc65       kube-scheduler-functional-734361             kube-system
	806125129e3fe       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     0                   a11294041370c       kube-controller-manager-functional-734361    kube-system
	
	
	==> coredns [11159beb4990aecdb2edf1251f83b035151198006b85560c982deaf15dc750dc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60469 - 12539 "HINFO IN 830941209147158791.6589486576616605785. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05500155s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c993214762674f8109365468aaa095a3872aebe30416f02b6f1e4154bce1a1db] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51578 - 19408 "HINFO IN 7102430953048595902.2115928949511561307. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111771745s
	
	
	==> describe nodes <==
	Name:               functional-734361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-734361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=functional-734361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_36_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:36:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-734361
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 08:47:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 08:45:22 +0000   Sat, 25 Oct 2025 08:36:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 08:45:22 +0000   Sat, 25 Oct 2025 08:36:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 08:45:22 +0000   Sat, 25 Oct 2025 08:36:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 08:45:22 +0000   Sat, 25 Oct 2025 08:36:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-734361
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                47567028-cc6d-4542-b727-ae087a6c3e0e
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-qzsxp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-x4zb8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-t5w77                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-kd9z6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-734361                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-hf77v                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-734361              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-734361     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-b56jq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-734361              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-z8wdl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hsv5r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-734361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-734361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-734361 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-734361 event: Registered Node functional-734361 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-734361 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-734361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-734361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-734361 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-734361 event: Registered Node functional-734361 in Controller
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [1b1050d4050218126f310237e7c1b51d84062d5e957d6238b2d4b42b2235d4bc] <==
	{"level":"warn","ts":"2025-10-25T08:36:16.144215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:36:16.150234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:36:16.156476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:36:16.162443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:36:16.175883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:36:16.189601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:36:16.234758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33472","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T08:37:00.555245Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T08:37:00.555325Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-734361","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T08:37:00.555423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T08:37:07.556948Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T08:37:07.557077Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T08:37:07.557133Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-25T08:37:07.557188Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-25T08:37:07.557157Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T08:37:07.557205Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T08:37:07.557210Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T08:37:07.557223Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T08:37:07.557252Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T08:37:07.557282Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T08:37:07.557294Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T08:37:07.560123Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T08:37:07.560178Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T08:37:07.560204Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T08:37:07.560213Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-734361","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ef04d7ea5f1edd10be7f889dfa29ec424849cd10cf10f2cb581c1402f9967752] <==
	{"level":"warn","ts":"2025-10-25T08:37:21.993554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.000809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.007111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.013237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.019583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.026376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.033174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.040616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.047085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.054132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.060456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.067703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.073771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.079897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.086407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.097770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.104275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.119273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.125725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.132672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:37:22.173082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49726","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T08:38:03.144130Z","caller":"traceutil/trace.go:172","msg":"trace[407230028] transaction","detail":"{read_only:false; response_revision:798; number_of_response:1; }","duration":"100.042434ms","start":"2025-10-25T08:38:03.044071Z","end":"2025-10-25T08:38:03.144113Z","steps":["trace[407230028] 'process raft request'  (duration: 99.931775ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:47:21.691555Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1138}
	{"level":"info","ts":"2025-10-25T08:47:21.712834Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1138,"took":"20.905008ms","hash":918150463,"current-db-size-bytes":3440640,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-25T08:47:21.712881Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":918150463,"revision":1138,"compact-revision":-1}
	
	
	==> kernel <==
	 08:48:04 up 30 min,  0 user,  load average: 0.35, 0.22, 0.29
	Linux functional-734361 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0917c9863eb7a51e494e92b4d3ce505f219ab17054485d0cbecba2d23bc782ad] <==
	I1025 08:46:01.814608       1 main.go:301] handling current node
	I1025 08:46:11.814802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:46:11.814845       1 main.go:301] handling current node
	I1025 08:46:21.814686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:46:21.814727       1 main.go:301] handling current node
	I1025 08:46:31.819735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:46:31.819788       1 main.go:301] handling current node
	I1025 08:46:41.815034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:46:41.815073       1 main.go:301] handling current node
	I1025 08:46:51.819806       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:46:51.819843       1 main.go:301] handling current node
	I1025 08:47:01.823882       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:47:01.823915       1 main.go:301] handling current node
	I1025 08:47:11.815133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:47:11.815168       1 main.go:301] handling current node
	I1025 08:47:21.815040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:47:21.815078       1 main.go:301] handling current node
	I1025 08:47:31.815018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:47:31.815057       1 main.go:301] handling current node
	I1025 08:47:41.823427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:47:41.823474       1 main.go:301] handling current node
	I1025 08:47:51.815090       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:47:51.815132       1 main.go:301] handling current node
	I1025 08:48:01.814691       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:48:01.814742       1 main.go:301] handling current node
	
	
	==> kindnet [41ae60d9ff41be6be60b4718be563f5f60b27ba451cf7f00844c17b7933a2b68] <==
	I1025 08:36:25.073767       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 08:36:25.074027       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 08:36:25.074182       1 main.go:148] setting mtu 1500 for CNI 
	I1025 08:36:25.074201       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 08:36:25.074218       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T08:36:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 08:36:25.365140       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 08:36:25.365201       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 08:36:25.365216       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 08:36:25.365376       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 08:36:25.765456       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 08:36:25.765491       1 metrics.go:72] Registering metrics
	I1025 08:36:25.765554       1 controller.go:711] "Syncing nftables rules"
	I1025 08:36:35.276958       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:36:35.277037       1 main.go:301] handling current node
	I1025 08:36:45.284720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:36:45.284754       1 main.go:301] handling current node
	I1025 08:36:55.277722       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:36:55.277769       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d9436faf906dae359faeb1a5b1a2eaa7c8240f9bc8f272688aa7525e6779c738] <==
	I1025 08:37:22.682175       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 08:37:23.533750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 08:37:23.585282       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1025 08:37:23.741151       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1025 08:37:23.742247       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 08:37:23.745993       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 08:37:24.315729       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 08:37:24.411046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 08:37:24.473702       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 08:37:24.479336       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 08:37:38.578560       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.227.54"}
	I1025 08:37:42.643330       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 08:37:42.744228       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.79.130"}
	I1025 08:37:45.942937       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 08:37:46.070141       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.45.74"}
	I1025 08:37:46.084430       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.88.211"}
	I1025 08:37:52.836044       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.135.120"}
	I1025 08:37:56.640378       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.99.215"}
	E1025 08:38:01.692154       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60030: use of closed network connection
	I1025 08:38:02.192812       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.117.187"}
	E1025 08:38:08.802829       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60286: use of closed network connection
	E1025 08:38:10.034473       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60296: use of closed network connection
	E1025 08:38:10.868608       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60314: use of closed network connection
	E1025 08:38:11.836295       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60340: use of closed network connection
	I1025 08:47:22.584708       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [11e8180c19ee44a7ea7742c9bd5c1c7e9b6018e53ea9494ca18211921228489a] <==
	I1025 08:37:25.983934       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 08:37:25.983920       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 08:37:25.984888       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 08:37:25.984910       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 08:37:25.984926       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 08:37:25.984955       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 08:37:25.985997       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 08:37:25.987094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 08:37:25.987378       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 08:37:25.988135       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:37:25.990323       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 08:37:25.991486       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 08:37:25.994758       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 08:37:25.994790       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 08:37:25.995939       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 08:37:25.996008       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 08:37:25.999166       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 08:37:26.005478       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 08:37:45.992871       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 08:37:46.000231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 08:37:46.003310       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 08:37:46.005853       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 08:37:46.011227       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 08:37:46.016440       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 08:37:46.016517       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [806125129e3fee27806122005ef1878f45356358a324fc9f2ec81ccd9b864878] <==
	I1025 08:36:23.644591       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 08:36:23.644635       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 08:36:23.644663       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 08:36:23.644721       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 08:36:23.644815       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-734361"
	I1025 08:36:23.644874       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 08:36:23.645870       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 08:36:23.645925       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 08:36:23.645936       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 08:36:23.645949       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 08:36:23.645954       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 08:36:23.645985       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 08:36:23.645989       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 08:36:23.646012       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 08:36:23.646027       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 08:36:23.646401       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 08:36:23.646455       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 08:36:23.646488       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 08:36:23.648172       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 08:36:23.649551       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 08:36:23.650919       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:36:23.652009       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:36:23.664055       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 08:36:23.668301       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:36:38.646120       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0bff7765f8633a3446efac09f5156170d2e104c39100359e883367629ba68ecd] <==
	I1025 08:37:01.452023       1 server_linux.go:53] "Using iptables proxy"
	I1025 08:37:01.514771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:37:01.615437       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:37:01.615479       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 08:37:01.615583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:37:01.634285       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 08:37:01.634332       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:37:01.640065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:37:01.640367       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:37:01.640403       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:37:01.641580       1 config.go:200] "Starting service config controller"
	I1025 08:37:01.641603       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:37:01.641661       1 config.go:309] "Starting node config controller"
	I1025 08:37:01.641667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:37:01.641675       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:37:01.641690       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:37:01.641695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:37:01.641794       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:37:01.641833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:37:01.742473       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 08:37:01.742548       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 08:37:01.742551       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [2d2a4992b42ae444cd0061f2b478999e86d704b299a3f7dc5fc884104ebd0b48] <==
	I1025 08:36:24.918954       1 server_linux.go:53] "Using iptables proxy"
	I1025 08:36:24.985455       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:36:25.086824       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:36:25.086875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 08:36:25.086985       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:36:25.109235       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 08:36:25.109300       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:36:25.116026       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:36:25.116419       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:36:25.116447       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:36:25.118068       1 config.go:200] "Starting service config controller"
	I1025 08:36:25.118108       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:36:25.118110       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:36:25.118142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:36:25.118170       1 config.go:309] "Starting node config controller"
	I1025 08:36:25.118180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:36:25.118187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:36:25.118219       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:36:25.118225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:36:25.219190       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 08:36:25.219250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 08:36:25.219254       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
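Both kube-proxy instances log the same advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. That is harmless on this single-node cluster, but the warning maps to a single field in the kube-proxy configuration. A minimal sketch, assuming a kubeadm-style KubeProxyConfiguration (the file location is deployment-specific and not taken from this run):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# "primary" (kube-proxy v1.27+) limits NodePort listeners to the
	# node's primary IPs instead of all local addresses
	nodePortAddresses: ["primary"]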
	
	==> kube-scheduler [e8efcf428f7bd9ddd332937f7cbf1818deeb1afc7202c6cb14e852c2b04e3549] <==
	I1025 08:37:21.674347       1 serving.go:386] Generated self-signed cert in-memory
	W1025 08:37:22.579494       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 08:37:22.579527       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 08:37:22.579540       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 08:37:22.579549       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 08:37:22.594271       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 08:37:22.594298       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:37:22.596386       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:37:22.596421       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:37:22.596698       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 08:37:22.596772       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 08:37:22.696928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [fc80ac041d8643fb95ffcb63c592f745913db5ac5d8de4e376772625fba4007f] <==
	E1025 08:36:16.653566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:36:16.653586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:36:16.653692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:36:16.653692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:36:16.653762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:36:16.653775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:36:16.653844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:36:17.475657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:36:17.496107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:36:17.556657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:36:17.605852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:36:17.615044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:36:17.761350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:36:17.813803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 08:36:17.813803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:36:17.825120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:36:17.874532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:36:17.910693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1025 08:36:18.251236       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:37:18.176545       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:37:18.176634       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 08:37:18.176633       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 08:37:18.176682       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 08:37:18.176695       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 08:37:18.176713       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 08:45:27 functional-734361 kubelet[4276]: E1025 08:45:27.464294    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:45:36 functional-734361 kubelet[4276]: E1025 08:45:36.463576    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:45:42 functional-734361 kubelet[4276]: E1025 08:45:42.463876    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:45:47 functional-734361 kubelet[4276]: E1025 08:45:47.464344    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:45:56 functional-734361 kubelet[4276]: E1025 08:45:56.464189    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:46:00 functional-734361 kubelet[4276]: E1025 08:46:00.465081    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:46:08 functional-734361 kubelet[4276]: E1025 08:46:08.464022    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:46:12 functional-734361 kubelet[4276]: E1025 08:46:12.464384    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:46:21 functional-734361 kubelet[4276]: E1025 08:46:21.463708    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:46:27 functional-734361 kubelet[4276]: E1025 08:46:27.464147    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:46:32 functional-734361 kubelet[4276]: E1025 08:46:32.463690    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:46:41 functional-734361 kubelet[4276]: E1025 08:46:41.464108    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:46:46 functional-734361 kubelet[4276]: E1025 08:46:46.464247    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:46:52 functional-734361 kubelet[4276]: E1025 08:46:52.464039    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:46:57 functional-734361 kubelet[4276]: E1025 08:46:57.463883    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:47:03 functional-734361 kubelet[4276]: E1025 08:47:03.463542    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:47:10 functional-734361 kubelet[4276]: E1025 08:47:10.464694    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:47:18 functional-734361 kubelet[4276]: E1025 08:47:18.463793    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:47:24 functional-734361 kubelet[4276]: E1025 08:47:24.464021    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:47:31 functional-734361 kubelet[4276]: E1025 08:47:31.464082    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:47:35 functional-734361 kubelet[4276]: E1025 08:47:35.463939    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:47:46 functional-734361 kubelet[4276]: E1025 08:47:46.464539    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:47:49 functional-734361 kubelet[4276]: E1025 08:47:49.463562    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	Oct 25 08:47:59 functional-734361 kubelet[4276]: E1025 08:47:59.463430    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-x4zb8" podUID="e0654921-2ea8-4fb6-8d7a-77a1b9c744d4"
	Oct 25 08:48:02 functional-734361 kubelet[4276]: E1025 08:48:02.466440    4276 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qzsxp" podUID="71844d9a-bcef-491d-ac1d-1d8b1532e70e"
	
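Every kubelet error above is the same failure: CRI-O is enforcing short-name mode, so the unqualified image name kicbase/echo-server:latest resolves to an ambiguous list of candidate registries and both echo-server deployments sit in ImagePullBackOff. A minimal sketch of an alias-based fix (the drop-in path and the docker.io registry are illustrative assumptions, not read from this host):

	# /etc/containers/registries.conf.d/99-echo-server.conf (hypothetical drop-in)
	[aliases]
	# map the short name to one fully qualified reference so enforcing
	# short-name mode can resolve it unambiguously
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

Fully qualifying the image reference in the pod spec would sidestep the short-name lookup entirely.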
	
	==> kubernetes-dashboard [74665363d81daf9b466b2a4e93ff9f3b3c6b33a01f14750fe7a8a8877b7e10e2] <==
	2025/10/25 08:37:49 Starting overwatch
	2025/10/25 08:37:49 Using namespace: kubernetes-dashboard
	2025/10/25 08:37:49 Using in-cluster config to connect to apiserver
	2025/10/25 08:37:49 Using secret token for csrf signing
	2025/10/25 08:37:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 08:37:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 08:37:49 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 08:37:49 Generating JWE encryption key
	2025/10/25 08:37:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 08:37:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 08:37:49 Initializing JWE encryption key from synchronized object
	2025/10/25 08:37:49 Creating in-cluster Sidecar client
	2025/10/25 08:37:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 08:37:49 Serving insecurely on HTTP port: 9090
	2025/10/25 08:38:19 Successful request to sidecar
	
	
	==> storage-provisioner [71d256fbaa0f93f848a32c302db2693e626c3c6676e47ada461e7b482271e2b9] <==
	I1025 08:36:36.037701       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-734361_57a7d191-3255-4ed5-99cb-7bb562fda563!
	W1025 08:36:37.946430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:37.950198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:39.953415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:39.957826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:41.960702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:41.966124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:43.969574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:43.973500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:45.977322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:45.982921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:47.985909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:47.989831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:49.993148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:49.997063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:51.999836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:52.006917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:54.010858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:54.015360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:56.018687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:56.022524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:58.025313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:58.029243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:37:00.032168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:37:00.036088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [80336c9a286dbc7f58c207202a724c6a1a759c7934a2537928a3476cd7221d76] <==
	W1025 08:47:39.630214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:41.633915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:41.638864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:43.641799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:43.647045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:45.649576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:45.654177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:47.657095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:47.663754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:49.666770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:49.671781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:51.675062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:51.678945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:53.682855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:53.686753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:55.690017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:55.694690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:57.698477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:57.704222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:59.707313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:47:59.711533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:48:01.714537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:48:01.718190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:48:03.722167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:48:03.726458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
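The wall of W1025 warnings above comes from the storage-provisioner polling core/v1 Endpoints (likely its leader-election lock); Kubernetes now points callers at discovery.k8s.io/v1 EndpointSlice instead. The warnings are noise here rather than the failure cause, and both views of the same data can be compared directly. A minimal sketch, assuming the hello-node service from this test:

    # Deprecated object the provisioner still watches (triggers the warning on v1.33+)
    kubectl --context functional-734361 get endpoints -n default

    # Replacement resource, selected by the standard owning-service label
    kubectl --context functional-734361 get endpointslices -n default -l kubernetes.io/service-name=hello-node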
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-734361 -n functional-734361
helpers_test.go:269: (dbg) Run:  kubectl --context functional-734361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-qzsxp hello-node-connect-7d85dfc575-x4zb8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-734361 describe pod busybox-mount hello-node-75c85bcc94-qzsxp hello-node-connect-7d85dfc575-x4zb8
helpers_test.go:290: (dbg) kubectl --context functional-734361 describe pod busybox-mount hello-node-75c85bcc94-qzsxp hello-node-connect-7d85dfc575-x4zb8:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-734361/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 08:37:45 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://9fc2f4a6d3157aed75364dab28aace5618321ebb7c5eb80bc33479bae23cd081
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 08:37:46 +0000
	      Finished:     Sat, 25 Oct 2025 08:37:46 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxcc2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hxcc2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-734361
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 762ms (762ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-qzsxp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-734361/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 08:37:42 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5m6fs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5m6fs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qzsxp to functional-734361
	  Normal   Pulling    7m21s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m21s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m21s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    15s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     15s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-x4zb8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-734361/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 08:38:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lmcg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8lmcg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-x4zb8 to functional-734361
	  Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.89s)
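The decisive events above are the kubelet pull failures: CRI-O is running with short-name-mode = "enforcing" in its registries.conf, so the unqualified reference kicbase/echo-server is resolved against the unqualified-search-registries list and aborted when the match is ambiguous. The usual remedy is to fully qualify the image. A sketch under those assumptions (the docker.io prefix is an assumption, not taken from the test):

    # Confirm the node's short-name policy (standard containers/image config path)
    minikube -p functional-734361 ssh -- grep -n short-name-mode /etc/containers/registries.conf

    # Hypothetical fix: a fully qualified reference needs no search-list lookup
    kubectl --context functional-734361 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest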

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-734361 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-734361 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-qzsxp" [71844d9a-bcef-491d-ac1d-1d8b1532e70e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-734361 -n functional-734361
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-25 08:47:43.083827555 +0000 UTC m=+1101.555641338
functional_test.go:1460: (dbg) Run:  kubectl --context functional-734361 describe po hello-node-75c85bcc94-qzsxp -n default
functional_test.go:1460: (dbg) kubectl --context functional-734361 describe po hello-node-75c85bcc94-qzsxp -n default:
Name:             hello-node-75c85bcc94-qzsxp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-734361/192.168.49.2
Start Time:       Sat, 25 Oct 2025 08:37:42 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5m6fs (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-5m6fs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qzsxp to functional-734361
  Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
  Warning  Failed     4m51s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-734361 logs hello-node-75c85bcc94-qzsxp -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-734361 logs hello-node-75c85bcc94-qzsxp -n default: exit status 1 (67.926912ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-qzsxp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-734361 logs hello-node-75c85bcc94-qzsxp -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)
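The harness above is a 10-minute poll for a Running pod labelled app=hello-node; the deadline expires because the image never pulls. The same wait, run by hand as a sketch with the test's own context and label:

    kubectl --context functional-734361 wait pod -l app=hello-node -n default --for=condition=Ready --timeout=10m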

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image load --daemon kicbase/echo-server:functional-734361 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-734361" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image load --daemon kicbase/echo-server:functional-734361 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-734361" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-734361
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image load --daemon kicbase/echo-server:functional-734361 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-734361" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image save kicbase/echo-server:functional-734361 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1025 08:38:15.789599   49508 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:38:15.789889   49508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:38:15.789898   49508 out.go:374] Setting ErrFile to fd 2...
	I1025 08:38:15.789902   49508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:38:15.790074   49508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:38:15.790683   49508 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:38:15.790784   49508 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:38:15.791110   49508 cli_runner.go:164] Run: docker container inspect functional-734361 --format={{.State.Status}}
	I1025 08:38:15.808889   49508 ssh_runner.go:195] Run: systemctl --version
	I1025 08:38:15.808945   49508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734361
	I1025 08:38:15.825571   49508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/functional-734361/id_rsa Username:docker}
	I1025 08:38:15.921991   49508 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1025 08:38:15.922071   49508 cache_images.go:254] Failed to load cached images for "functional-734361": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1025 08:38:15.922095   49508 cache_images.go:266] failed pushing to: functional-734361

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
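This one is purely downstream of ImageSaveToFile above: `image save` never wrote echo-server-save.tar, so `image load` fails at the stat before ever touching the cluster. A one-line guard that would make the dependency explicit, assuming the same path:

    test -f /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar || echo "tarball missing: 'image save' did not produce it"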

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-734361
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image save --daemon kicbase/echo-server:functional-734361 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-734361
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-734361: exit status 1 (17.784424ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-734361

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-734361

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
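All six ImageCommands failures trace back to one break in the chain: the image tagged for this profile never lands in the cluster's crio image store, so every later ls/save/load/inspect finds nothing. The round trip these subtests wrap, sketched end to end with the commands already shown above (the /tmp tarball path is an assumption):

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-734361
    out/minikube-linux-amd64 -p functional-734361 image load --daemon kicbase/echo-server:functional-734361
    out/minikube-linux-amd64 -p functional-734361 image ls                 # the tag should appear here
    out/minikube-linux-amd64 -p functional-734361 image save kicbase/echo-server:functional-734361 /tmp/echo-server.tar
    out/minikube-linux-amd64 -p functional-734361 image load /tmp/echo-server.tar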

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 service --namespace=default --https --url hello-node: exit status 115 (543.739512ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30868
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-734361 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 service hello-node --url --format={{.IP}}: exit status 115 (538.988376ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-734361 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 service hello-node --url: exit status 115 (541.630354ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30868
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-734361 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30868
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
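All three ServiceCmd URL variants print a plausible NodePort URL and still exit 115: `minikube service` also verifies that a running pod backs the service, and with the echo-server pull broken there are no ready endpoints. A sketch of checking that state directly, assuming the same context:

    kubectl --context functional-734361 get svc hello-node -n default -o wide
    kubectl --context functional-734361 get endpointslices -n default -l kubernetes.io/service-name=hello-node   # no ready endpoints => SVC_UNREACHABLE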

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-550540 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-550540 --output=json --user=testUser: exit status 80 (1.752884556s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"30278eb2-20f0-4646-8f68-07f59aca224a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-550540 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"dd4fa10c-2a13-40bd-afff-be74d40a46a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T08:56:57Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c8b9a138-3208-49c0-b652-a5e5ab9f4261","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-550540 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.75s)
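With --output=json, minikube writes one CloudEvents envelope per line, so the failure detail above is machine-readable. A sketch of extracting the error text from the stream with jq (assuming jq is available), using this test's profile name:

    out/minikube-linux-amd64 pause -p json-output-550540 --output=json --user=testUser | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'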

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.97s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-550540 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-550540 --output=json --user=testUser: exit status 80 (1.974525812s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2fc13348-4d26-46e9-8e64-ef729288669c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-550540 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"eb9a79aa-a58d-471b-a0b0-e7f45607e2ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T08:56:59Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6ac52987-3d65-430a-ba54-2a4dd94938e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-550540 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.97s)

                                                
                                    
x
+
TestPause/serial/Pause (9.97s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-613858 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-613858 --alsologtostderr -v=5: exit status 80 (2.065552939s)

                                                
                                                
-- stdout --
	* Pausing node pause-613858 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:09:20.447265  193645 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:09:20.447509  193645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:20.447518  193645 out.go:374] Setting ErrFile to fd 2...
	I1025 09:09:20.447521  193645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:20.447766  193645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:09:20.447999  193645 out.go:368] Setting JSON to false
	I1025 09:09:20.448040  193645 mustload.go:65] Loading cluster: pause-613858
	I1025 09:09:20.448390  193645 config.go:182] Loaded profile config "pause-613858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:09:20.448811  193645 cli_runner.go:164] Run: docker container inspect pause-613858 --format={{.State.Status}}
	I1025 09:09:20.474440  193645 host.go:66] Checking if "pause-613858" exists ...
	I1025 09:09:20.474791  193645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:09:20.558363  193645 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-25 09:09:20.541867674 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:09:20.559200  193645 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-613858 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:09:20.560886  193645 out.go:179] * Pausing node pause-613858 ... 
	I1025 09:09:20.562152  193645 host.go:66] Checking if "pause-613858" exists ...
	I1025 09:09:20.562465  193645 ssh_runner.go:195] Run: systemctl --version
	I1025 09:09:20.562505  193645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-613858
	I1025 09:09:20.588741  193645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/pause-613858/id_rsa Username:docker}
	I1025 09:09:20.695396  193645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:09:20.710629  193645 pause.go:52] kubelet running: true
	I1025 09:09:20.710707  193645 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:09:20.879303  193645 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:09:20.879411  193645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:09:20.955291  193645 cri.go:89] found id: "0fa72110f5f6b461764369463c4269ad3d2ccf4ec126b5d00e00d5176ba43d08"
	I1025 09:09:20.955340  193645 cri.go:89] found id: "83593fd8cdbbfd2476b535cc8cf1fd2c51d1c5678c0f973463f66b6f3d3bc667"
	I1025 09:09:20.955347  193645 cri.go:89] found id: "b0cfe0834184d1e241169faa4b377eb51017495feda9e43ce723b74b50175435"
	I1025 09:09:20.955355  193645 cri.go:89] found id: "c7886d58cffc0a0ecd272461f1ea32ee36900dc767b2a938e56c9dd72bf6c45c"
	I1025 09:09:20.955359  193645 cri.go:89] found id: "7bacff63c1379282854265e0ac2d1d15217ce38911df760f4ce89456b6c21b75"
	I1025 09:09:20.955364  193645 cri.go:89] found id: "e5cf81d0eb29ab58e21cabeccffa5d5469ee3b5aacfbd1d6280da96db059eb2a"
	I1025 09:09:20.955368  193645 cri.go:89] found id: "8b7f648a973a1ff61e663028b204c0704fee7991a50321618b6fc19b83936f4d"
	I1025 09:09:20.955372  193645 cri.go:89] found id: ""
	I1025 09:09:20.955417  193645 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:09:20.969461  193645 retry.go:31] will retry after 360.991799ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:09:20Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:09:21.330892  193645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:09:21.363749  193645 pause.go:52] kubelet running: false
	I1025 09:09:21.363815  193645 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:09:21.507854  193645 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:09:21.507983  193645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:09:21.592564  193645 cri.go:89] found id: "0fa72110f5f6b461764369463c4269ad3d2ccf4ec126b5d00e00d5176ba43d08"
	I1025 09:09:21.592592  193645 cri.go:89] found id: "83593fd8cdbbfd2476b535cc8cf1fd2c51d1c5678c0f973463f66b6f3d3bc667"
	I1025 09:09:21.592597  193645 cri.go:89] found id: "b0cfe0834184d1e241169faa4b377eb51017495feda9e43ce723b74b50175435"
	I1025 09:09:21.592603  193645 cri.go:89] found id: "c7886d58cffc0a0ecd272461f1ea32ee36900dc767b2a938e56c9dd72bf6c45c"
	I1025 09:09:21.592606  193645 cri.go:89] found id: "7bacff63c1379282854265e0ac2d1d15217ce38911df760f4ce89456b6c21b75"
	I1025 09:09:21.592611  193645 cri.go:89] found id: "e5cf81d0eb29ab58e21cabeccffa5d5469ee3b5aacfbd1d6280da96db059eb2a"
	I1025 09:09:21.592615  193645 cri.go:89] found id: "8b7f648a973a1ff61e663028b204c0704fee7991a50321618b6fc19b83936f4d"
	I1025 09:09:21.592619  193645 cri.go:89] found id: ""
	I1025 09:09:21.592680  193645 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:09:21.607592  193645 retry.go:31] will retry after 513.309833ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:09:21Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:09:22.122200  193645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:09:22.142429  193645 pause.go:52] kubelet running: false
	I1025 09:09:22.142493  193645 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:09:22.321549  193645 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:09:22.321768  193645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:09:22.402609  193645 cri.go:89] found id: "0fa72110f5f6b461764369463c4269ad3d2ccf4ec126b5d00e00d5176ba43d08"
	I1025 09:09:22.402634  193645 cri.go:89] found id: "83593fd8cdbbfd2476b535cc8cf1fd2c51d1c5678c0f973463f66b6f3d3bc667"
	I1025 09:09:22.402653  193645 cri.go:89] found id: "b0cfe0834184d1e241169faa4b377eb51017495feda9e43ce723b74b50175435"
	I1025 09:09:22.402658  193645 cri.go:89] found id: "c7886d58cffc0a0ecd272461f1ea32ee36900dc767b2a938e56c9dd72bf6c45c"
	I1025 09:09:22.402662  193645 cri.go:89] found id: "7bacff63c1379282854265e0ac2d1d15217ce38911df760f4ce89456b6c21b75"
	I1025 09:09:22.402666  193645 cri.go:89] found id: "e5cf81d0eb29ab58e21cabeccffa5d5469ee3b5aacfbd1d6280da96db059eb2a"
	I1025 09:09:22.402670  193645 cri.go:89] found id: "8b7f648a973a1ff61e663028b204c0704fee7991a50321618b6fc19b83936f4d"
	I1025 09:09:22.402674  193645 cri.go:89] found id: ""
	I1025 09:09:22.402719  193645 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:09:22.417819  193645 out.go:203] 
	W1025 09:09:22.419236  193645 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:09:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:09:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:09:22.419255  193645 out.go:285] * 
	* 
	W1025 09:09:22.423144  193645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:09:22.424777  193645 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-613858 --alsologtostderr -v=5" : exit status 80
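The pause path shells into the node and enumerates containers with `sudo runc list -f json`, which reads runc's state directory, and the trace shows /run/runc simply does not exist even though CRI-visible containers are running; that suggests CRI-O is driving the OCI runtime under a different state root. A sketch of confirming that from the host, assuming default paths (the /run/crun entry is a guess, not taken from the logs):

    minikube -p pause-613858 ssh -- sudo ls -ld /run/runc /run/crun /run/crio
    minikube -p pause-613858 ssh -- sudo crictl ps   # the pods clearly run via the CRI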
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-613858
helpers_test.go:243: (dbg) docker inspect pause-613858:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535",
	        "Created": "2025-10-25T09:08:36.465206893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180030,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:08:36.512442935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/hostname",
	        "HostsPath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/hosts",
	        "LogPath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535-json.log",
	        "Name": "/pause-613858",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-613858:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-613858",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535",
	                "LowerDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-613858",
	                "Source": "/var/lib/docker/volumes/pause-613858/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-613858",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-613858",
	                "name.minikube.sigs.k8s.io": "pause-613858",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "46dd6e693472d68187b0c27142d3e44146bb5716035491718b01916a2bb80118",
	            "SandboxKey": "/var/run/docker/netns/46dd6e693472",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-613858": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:3d:b1:78:7d:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34f2c802e492bddede425726b810edc9d256626ad3c21a8a86e5f40ac78530c1",
	                    "EndpointID": "a17e9eccf008026d3f53656dda89d89c351727b733cf0c0874c2d1cd0faba1ed",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-613858",
	                        "18bb7d3289f8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
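The inspect dump above can be narrowed to the fields the post-mortem actually uses. A sketch against the same container:

	# State and restart count in one line:
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' pause-613858
	# Host port forwarded to the apiserver port 8443 (32986 in the dump above):
	docker port pause-613858 8443/tcp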
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-613858 -n pause-613858
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-613858 -n pause-613858: exit status 2 (397.015198ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
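minikube status reports component state partly through its exit code, which is why the harness notes that exit status 2 "may be ok". A sketch for reading the full state instead of only the Host field (same profile assumed):

	out/minikube-linux-amd64 status -p pause-613858 -o json; echo "status exit=$?"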
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-613858 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-613858 logs -n 25: (4.852367274s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:06 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:06 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:06 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --cancel-scheduled                                                                                 │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:06 UTC │ 25 Oct 25 09:06 UTC │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:07 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:07 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:07 UTC │ 25 Oct 25 09:07 UTC │
	│ delete  │ -p scheduled-stop-344499                                                                                                    │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ start   │ -p insufficient-storage-791576 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-791576 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ delete  │ -p insufficient-storage-791576                                                                                              │ insufficient-storage-791576 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ start   │ -p offline-crio-559981 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-559981         │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p pause-613858 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-613858                │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p NoKubernetes-629442 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ start   │ -p NoKubernetes-629442 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ start   │ -p stopped-upgrade-626100 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-626100      │ jenkins │ v1.32.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ stop    │ stopped-upgrade-626100 stop                                                                                                 │ stopped-upgrade-626100      │ jenkins │ v1.32.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p NoKubernetes-629442 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p stopped-upgrade-626100 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-626100      │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p pause-613858 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-613858                │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ delete  │ -p stopped-upgrade-626100                                                                                                   │ stopped-upgrade-626100      │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ delete  │ -p offline-crio-559981                                                                                                      │ offline-crio-559981         │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p force-systemd-flag-742570 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-742570   │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ pause   │ -p pause-613858 --alsologtostderr -v=5                                                                                      │ pause-613858                │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ start   │ -p running-upgrade-462303 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ running-upgrade-462303      │ jenkins │ v1.32.0 │ 25 Oct 25 09:09 UTC │                     │
	│ delete  │ -p NoKubernetes-629442                                                                                                      │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
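The audit table above is rendered from minikube's persistent audit log. A sketch for inspecting the raw entries, assuming minikube's default audit location under the MINIKUBE_HOME shown in these logs:

	tail -n 25 /home/jenkins/minikube-integration/21796-5966/.minikube/logs/audit.json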
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:09:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:09:20.715546  193884 out.go:296] Setting OutFile to fd 1 ...
	I1025 09:09:20.715980  193884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:20.715985  193884 out.go:309] Setting ErrFile to fd 2...
	I1025 09:09:20.715990  193884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:20.716312  193884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:09:20.717075  193884 out.go:303] Setting JSON to false
	I1025 09:09:20.718320  193884 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3109,"bootTime":1761380252,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:09:20.718412  193884 start.go:138] virtualization: kvm guest
	I1025 09:09:20.720531  193884 out.go:177] * [running-upgrade-462303] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:09:20.722481  193884 out.go:177]   - MINIKUBE_LOCATION=21796
	I1025 09:09:20.722547  193884 notify.go:220] Checking for updates...
	I1025 09:09:20.723972  193884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:09:20.725484  193884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:09:20.726806  193884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:09:20.727948  193884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:09:20.729124  193884 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1053519307
	I1025 09:09:20.731129  193884 config.go:182] Loaded profile config "NoKubernetes-629442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 09:09:20.731280  193884 config.go:182] Loaded profile config "force-systemd-flag-742570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:09:20.731480  193884 config.go:182] Loaded profile config "pause-613858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:09:20.731707  193884 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 09:09:20.771487  193884 docker.go:122] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:09:20.771603  193884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:09:20.844783  193884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-25 09:09:20.832345705 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:09:20.844953  193884 docker.go:295] overlay module found
	I1025 09:09:20.915860  193884 out.go:177] * Using the docker driver based on user configuration
	I1025 09:09:20.992696  193884 start.go:298] selected driver: docker
	I1025 09:09:20.992711  193884 start.go:902] validating driver "docker" against <nil>
	I1025 09:09:20.992728  193884 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:09:20.994049  193884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:09:21.058971  193884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-25 09:09:21.048493249 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:09:21.059136  193884 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 09:09:21.059323  193884 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:09:21.062861  193884 out.go:177] * Using Docker driver with root privileges
	I1025 09:09:21.064197  193884 cni.go:84] Creating CNI manager for ""
	I1025 09:09:21.064211  193884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:09:21.064223  193884 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:09:21.064238  193884 start_flags.go:323] config:
	{Name:running-upgrade-462303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-462303 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 09:09:21.065880  193884 out.go:177] * Starting control plane node running-upgrade-462303 in cluster running-upgrade-462303
	I1025 09:09:21.067358  193884 cache.go:121] Beginning downloading kic base image for docker with crio
	I1025 09:09:21.068816  193884 out.go:177] * Pulling base image ...
	I1025 09:09:21.070282  193884 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 09:09:21.070326  193884 preload.go:148] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1025 09:09:21.070341  193884 cache.go:56] Caching tarball of preloaded images
	I1025 09:09:21.070386  193884 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1025 09:09:21.070446  193884 preload.go:174] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:09:21.070455  193884 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1025 09:09:21.070609  193884 profile.go:148] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/running-upgrade-462303/config.json ...
	I1025 09:09:21.070632  193884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/running-upgrade-462303/config.json: {Name:mkab0d079f49cc22c32a61baf0f2a228753a5bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:09:21.097152  193884 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1025 09:09:21.097185  193884 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1025 09:09:21.097204  193884 cache.go:194] Successfully downloaded all kic artifacts
	I1025 09:09:21.097247  193884 start.go:365] acquiring machines lock for running-upgrade-462303: {Name:mke270028345595291fadfd402de715eedd1e6d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:09:21.097338  193884 start.go:369] acquired machines lock for "running-upgrade-462303" in 74.922µs
	I1025 09:09:21.097356  193884 start.go:93] Provisioning new machine with config: &{Name:running-upgrade-462303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-462303 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:09:21.097446  193884 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:09:20.671293  188083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:09:20.717983  188083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:09:20.723931  188083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:09:20.724017  188083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:09:20.734615  188083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:09:20.734660  188083 start.go:495] detecting cgroup driver to use...
	I1025 09:09:20.734692  188083 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:09:20.734737  188083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:09:20.758056  188083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:09:20.774738  188083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:09:20.774806  188083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:09:20.796339  188083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:09:20.815729  188083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:09:20.920899  188083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:09:21.018913  188083 docker.go:234] disabling docker service ...
	I1025 09:09:21.018981  188083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:09:21.038700  188083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:09:21.054727  188083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:09:21.164049  188083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:09:21.277168  188083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:09:21.291536  188083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:09:21.311007  188083 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1025 09:09:21.563138  188083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 09:09:21.563201  188083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:09:21.580808  188083 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:09:21.580880  188083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:09:21.593710  188083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:09:21.605117  188083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:09:21.616248  188083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:09:21.627147  188083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:09:21.636956  188083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:09:21.648089  188083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:09:21.747630  188083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:09:22.054262  188083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:09:22.054317  188083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:09:22.058401  188083 start.go:563] Will wait 60s for crictl version
	I1025 09:09:22.058464  188083 ssh_runner.go:195] Run: which crictl
	I1025 09:09:22.062110  188083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:09:22.086220  188083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:09:22.086316  188083 ssh_runner.go:195] Run: crio --version
	I1025 09:09:22.118734  188083 ssh_runner.go:195] Run: crio --version
	I1025 09:09:22.154780  188083 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1025 09:09:22.162131  188083 ssh_runner.go:195] Run: rm -f paused
	I1025 09:09:22.169323  188083 out.go:179] * Done! minikube is ready without Kubernetes!
	I1025 09:09:22.172374  188083 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
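The sed edits logged above rewrite CRI-O's drop-in config before the crio restart; their effect can be verified from a shell on the node (e.g. via minikube ssh). A sketch using the drop-in path from the log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"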
	
	
	==> CRI-O <==
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.254755611Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.255544697Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.255569227Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.255601604Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.25647044Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.256494945Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.260565004Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.260598298Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.261355099Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.261871614Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.261939846Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.267732179Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.312563228Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-5mvjc Namespace:kube-system ID:f0eb798730c40a61a4f07194aaf11290015229cafb58febf0fc75fd8cf13cd1e UID:ab423220-2a94-4f0c-9626-5f3151f00e87 NetNS:/var/run/netns/3826a89c-fc54-4382-9773-960373378bd3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000528008}] Aliases:map[]}"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.312796487Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-5mvjc for CNI network kindnet (type=ptp)"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313224772Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313257689Z" level=info msg="Starting seccomp notifier watcher"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313321324Z" level=info msg="Create NRI interface"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313431083Z" level=info msg="built-in NRI default validator is disabled"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313445745Z" level=info msg="runtime interface created"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313460165Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313475087Z" level=info msg="runtime interface starting up..."
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313482438Z" level=info msg="starting plugins..."
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313496886Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313988652Z" level=info msg="No systemd watchdog enabled"
	Oct 25 09:09:17 pause-613858 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0fa72110f5f6b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   f0eb798730c40       coredns-66bc5c9577-5mvjc               kube-system
	83593fd8cdbbf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   ad62f6a61a17b       kube-proxy-4n9sk                       kube-system
	b0cfe0834184d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   1d6a6432cc1cc       kindnet-vcf92                          kube-system
	c7886d58cffc0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   095ba79f2af32       etcd-pause-613858                      kube-system
	7bacff63c1379       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   cbbc2d7d8552b       kube-controller-manager-pause-613858   kube-system
	e5cf81d0eb29a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   7251f06ff8ebc       kube-apiserver-pause-613858            kube-system
	8b7f648a973a1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   4c5017a8469c1       kube-scheduler-pause-613858            kube-system
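The container table above is CRI-O's view of the node and can be reproduced live. A sketch, assuming the pause-613858 profile:

	minikube ssh -p pause-613858 -- sudo crictl ps -a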
	
	
	==> coredns [0fa72110f5f6b461764369463c4269ad3d2ccf4ec126b5d00e00d5176ba43d08] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60887 - 27879 "HINFO IN 7151933186248109070.5145578864095086868. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091127584s
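The CoreDNS section captures only the startup banner and one HINFO self-check; the same logs are also reachable through the API server. A sketch, assuming kubectl points at this cluster:

	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20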
	
	
	==> describe nodes <==
	Name:               pause-613858
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-613858
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=pause-613858
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_08_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:08:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-613858
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:09:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:09:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-613858
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f342c96d-bde0-473c-872a-568e575eb88b
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5mvjc                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-613858                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-vcf92                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-613858             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-613858    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-4n9sk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-613858             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-613858 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-613858 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-613858 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-613858 event: Registered Node pause-613858 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-613858 status is now: NodeReady
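The node view above (all conditions healthy, NodeReady shortly before the failed pause) can be re-queried at any time. A sketch, assuming the kubeconfig context minikube created for this profile:

	kubectl --context pause-613858 describe node pause-613858
	kubectl --context pause-613858 get events --field-selector involvedObject.name=pause-613858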
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [c7886d58cffc0a0ecd272461f1ea32ee36900dc767b2a938e56c9dd72bf6c45c] <==
	{"level":"warn","ts":"2025-10-25T09:08:51.234178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.244017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.251033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.258071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.265072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.273054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.280765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.288760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.297302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.308854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.316186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.323900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.331083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.339777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.347678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.355686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.363509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.369961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.376085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.384058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.399907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.404247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.412054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.419556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.473711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54414","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:25 up 51 min,  0 user,  load average: 4.62, 2.25, 1.47
	Linux pause-613858 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b0cfe0834184d1e241169faa4b377eb51017495feda9e43ce723b74b50175435] <==
	I1025 09:09:00.803216       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:09:00.803451       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:09:00.803618       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:09:00.803687       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:09:00.803712       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:09:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:09:01.103918       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:09:01.103960       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:09:01.103974       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:09:01.104120       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:09:01.504216       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:09:01.504239       1 metrics.go:72] Registering metrics
	I1025 09:09:01.504301       1 controller.go:711] "Syncing nftables rules"
	I1025 09:09:11.007256       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:09:11.007338       1 main.go:301] handling current node
	I1025 09:09:21.013736       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:09:21.013769       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5cf81d0eb29ab58e21cabeccffa5d5469ee3b5aacfbd1d6280da96db059eb2a] <==
	I1025 09:08:52.267675       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:08:52.267714       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 09:08:52.294604       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:08:52.295342       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:08:52.299101       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:08:52.303460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:08:52.314733       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:08:52.324109       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:08:53.152127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:08:53.157291       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:08:53.157316       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:08:54.005585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:08:54.062772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:08:54.149520       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:08:54.158753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 09:08:54.160338       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:08:54.165892       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:08:54.231402       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:08:55.206170       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:08:55.221899       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:08:55.232108       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:08:59.337303       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:08:59.343130       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:09:00.083260       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:09:00.183523       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7bacff63c1379282854265e0ac2d1d15217ce38911df760f4ce89456b6c21b75] <==
	I1025 09:08:59.254615       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:08:59.258144       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:08:59.261506       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-613858" podCIDRs=["10.244.0.0/24"]
	I1025 09:08:59.264507       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:08:59.267847       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:08:59.279927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:08:59.279950       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:08:59.279958       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:08:59.280067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:08:59.280334       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:08:59.280601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:08:59.280621       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:08:59.280655       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:08:59.280819       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:08:59.280947       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:08:59.282039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:08:59.282070       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:08:59.283328       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:08:59.283350       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:08:59.283330       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:08:59.285710       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:08:59.285857       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:08:59.285941       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-613858"
	I1025 09:08:59.285997       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:09:14.288124       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [83593fd8cdbbfd2476b535cc8cf1fd2c51d1c5678c0f973463f66b6f3d3bc667] <==
	I1025 09:09:00.615550       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:09:00.678847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:09:00.779338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:09:00.779368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:09:00.779472       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:09:00.800494       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:09:00.800547       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:09:00.806975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:09:00.807420       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:09:00.807508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:09:00.809333       1 config.go:200] "Starting service config controller"
	I1025 09:09:00.809354       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:09:00.809378       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:09:00.809383       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:09:00.809421       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:09:00.809436       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:09:00.809668       1 config.go:309] "Starting node config controller"
	I1025 09:09:00.809678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:09:00.809686       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:09:00.909540       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:09:00.909680       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:09:00.909711       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8b7f648a973a1ff61e663028b204c0704fee7991a50321618b6fc19b83936f4d] <==
	E1025 09:08:52.320539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:08:52.320637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:08:52.321100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:08:52.321357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:08:52.322456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:08:52.323467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:08:52.331264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:08:52.332199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:08:52.335624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:08:53.142911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:08:53.166276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:08:53.192290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:08:53.201137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:08:53.245368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:08:53.375157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:08:53.376869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:08:53.376879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:08:53.441003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:08:53.485595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:08:53.488848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:08:53.556835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:08:53.633828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:08:53.636809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:08:53.707161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1025 09:08:55.304629       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:08:56 pause-613858 kubelet[1286]: I1025 09:08:56.212949    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-613858" podStartSLOduration=1.212941123 podStartE2EDuration="1.212941123s" podCreationTimestamp="2025-10-25 09:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:08:56.200794018 +0000 UTC m=+1.210551909" watchObservedRunningTime="2025-10-25 09:08:56.212941123 +0000 UTC m=+1.222699053"
	Oct 25 09:08:59 pause-613858 kubelet[1286]: I1025 09:08:59.272048    1286 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:08:59 pause-613858 kubelet[1286]: I1025 09:08:59.272771    1286 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245538    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f26c494-986a-4ddb-96eb-342dac616a0c-lib-modules\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245586    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f70706f6-0293-43f0-b6ab-68ee5a45051b-kube-proxy\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245609    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f70706f6-0293-43f0-b6ab-68ee5a45051b-lib-modules\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245635    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f26c494-986a-4ddb-96eb-342dac616a0c-xtables-lock\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245677    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f70706f6-0293-43f0-b6ab-68ee5a45051b-xtables-lock\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245700    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88ns\" (UniqueName: \"kubernetes.io/projected/9f26c494-986a-4ddb-96eb-342dac616a0c-kube-api-access-q88ns\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245723    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9zf\" (UniqueName: \"kubernetes.io/projected/f70706f6-0293-43f0-b6ab-68ee5a45051b-kube-api-access-zq9zf\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245750    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9f26c494-986a-4ddb-96eb-342dac616a0c-cni-cfg\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:01 pause-613858 kubelet[1286]: I1025 09:09:01.184003    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vcf92" podStartSLOduration=1.183979767 podStartE2EDuration="1.183979767s" podCreationTimestamp="2025-10-25 09:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:09:01.183855661 +0000 UTC m=+6.193613557" watchObservedRunningTime="2025-10-25 09:09:01.183979767 +0000 UTC m=+6.193737662"
	Oct 25 09:09:01 pause-613858 kubelet[1286]: I1025 09:09:01.192485    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4n9sk" podStartSLOduration=1.192467137 podStartE2EDuration="1.192467137s" podCreationTimestamp="2025-10-25 09:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:09:01.192314993 +0000 UTC m=+6.202072889" watchObservedRunningTime="2025-10-25 09:09:01.192467137 +0000 UTC m=+6.202225032"
	Oct 25 09:09:11 pause-613858 kubelet[1286]: I1025 09:09:11.121607    1286 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:09:11 pause-613858 kubelet[1286]: I1025 09:09:11.226605    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljvc\" (UniqueName: \"kubernetes.io/projected/ab423220-2a94-4f0c-9626-5f3151f00e87-kube-api-access-jljvc\") pod \"coredns-66bc5c9577-5mvjc\" (UID: \"ab423220-2a94-4f0c-9626-5f3151f00e87\") " pod="kube-system/coredns-66bc5c9577-5mvjc"
	Oct 25 09:09:11 pause-613858 kubelet[1286]: I1025 09:09:11.226667    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab423220-2a94-4f0c-9626-5f3151f00e87-config-volume\") pod \"coredns-66bc5c9577-5mvjc\" (UID: \"ab423220-2a94-4f0c-9626-5f3151f00e87\") " pod="kube-system/coredns-66bc5c9577-5mvjc"
	Oct 25 09:09:12 pause-613858 kubelet[1286]: I1025 09:09:12.208740    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5mvjc" podStartSLOduration=12.208717674 podStartE2EDuration="12.208717674s" podCreationTimestamp="2025-10-25 09:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:09:12.20840202 +0000 UTC m=+17.218159915" watchObservedRunningTime="2025-10-25 09:09:12.208717674 +0000 UTC m=+17.218475569"
	Oct 25 09:09:17 pause-613858 kubelet[1286]: W1025 09:09:17.208016    1286 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 25 09:09:17 pause-613858 kubelet[1286]: E1025 09:09:17.208094    1286 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 25 09:09:17 pause-613858 kubelet[1286]: E1025 09:09:17.208138    1286 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 09:09:17 pause-613858 kubelet[1286]: E1025 09:09:17.208154    1286 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 09:09:20 pause-613858 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:09:20 pause-613858 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:09:20 pause-613858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:09:20 pause-613858 systemd[1]: kubelet.service: Consumed 1.157s CPU time.
	

-- /stdout --
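Note on the kubelet tail above: the 09:09:17 entries show kubelet failing to reach /var/run/crio/crio.sock just before systemd stops kubelet at 09:09:20, and the CRI-O section in the second dump below shows a fresh crio process (pid 2125) re-reading its configuration at the same timestamp. A minimal sketch for checking the socket and unit state by hand, assuming the pause-613858 node container is still up (plain docker/systemctl invocations, not part of the harness):

	docker exec pause-613858 ls -l /var/run/crio/crio.sock
	docker exec pause-613858 systemctl is-active crio kubelet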
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-613858 -n pause-613858
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-613858 -n pause-613858: exit status 2 (397.873357ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-613858 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-613858
helpers_test.go:243: (dbg) docker inspect pause-613858:

-- stdout --
	[
	    {
	        "Id": "18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535",
	        "Created": "2025-10-25T09:08:36.465206893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180030,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:08:36.512442935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/hostname",
	        "HostsPath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/hosts",
	        "LogPath": "/var/lib/docker/containers/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535/18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535-json.log",
	        "Name": "/pause-613858",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-613858:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-613858",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18bb7d3289f899674f79e1c63bcfe974429bf4f059eeddce80bcb8544e008535",
	                "LowerDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8bbec8f502cfaef591aaa937654cc4c2842bf590329e50c8032e3cf255f3038/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-613858",
	                "Source": "/var/lib/docker/volumes/pause-613858/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-613858",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-613858",
	                "name.minikube.sigs.k8s.io": "pause-613858",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "46dd6e693472d68187b0c27142d3e44146bb5716035491718b01916a2bb80118",
	            "SandboxKey": "/var/run/docker/netns/46dd6e693472",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-613858": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:3d:b1:78:7d:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34f2c802e492bddede425726b810edc9d256626ad3c21a8a86e5f40ac78530c1",
	                    "EndpointID": "a17e9eccf008026d3f53656dda89d89c351727b733cf0c0874c2d1cd0faba1ed",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-613858",
	                        "18bb7d3289f8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
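In the inspect output above, "Status": "running" together with "Paused": false shows the kicbase node container itself was never frozen; the pause under test acts on the workloads inside it, not on the outer container. To watch just those two fields, a standard docker format query against the container name from this run:

	docker inspect -f '{{.State.Status}} {{.State.Paused}}' pause-613858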
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-613858 -n pause-613858
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-613858 -n pause-613858: exit status 2 (510.163459ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
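minikube status encodes component state in its exit code, so a non-zero exit with "Running" on stdout need not mean the command itself failed, which is why the harness adds "(may be ok)". For a per-component view instead of a single template field, minikube's documented JSON output can be used (standard minikube status flags, not taken from this run):

	out/minikube-linux-amd64 status -p pause-613858 -o json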
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-613858 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-613858 logs -n 25: (1.106775348s)
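The dump below is capped at the last 25 lines per component (logs -n 25), so only the tail of each section appears. When more context is needed, minikube logs can write the uncapped output to a file (--file is a documented minikube logs flag; the path here is illustrative):

	out/minikube-linux-amd64 -p pause-613858 logs --file /tmp/pause-613858.log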
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:06 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:06 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --cancel-scheduled                                                                                 │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:06 UTC │ 25 Oct 25 09:06 UTC │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:07 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:07 UTC │                     │
	│ stop    │ -p scheduled-stop-344499 --schedule 15s                                                                                     │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:07 UTC │ 25 Oct 25 09:07 UTC │
	│ delete  │ -p scheduled-stop-344499                                                                                                    │ scheduled-stop-344499       │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ start   │ -p insufficient-storage-791576 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-791576 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ delete  │ -p insufficient-storage-791576                                                                                              │ insufficient-storage-791576 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ start   │ -p offline-crio-559981 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-559981         │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p pause-613858 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-613858                │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p NoKubernetes-629442 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ start   │ -p NoKubernetes-629442 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ start   │ -p stopped-upgrade-626100 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-626100      │ jenkins │ v1.32.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ stop    │ stopped-upgrade-626100 stop                                                                                                 │ stopped-upgrade-626100      │ jenkins │ v1.32.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p NoKubernetes-629442 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p stopped-upgrade-626100 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-626100      │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p pause-613858 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-613858                │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ delete  │ -p stopped-upgrade-626100                                                                                                   │ stopped-upgrade-626100      │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ delete  │ -p offline-crio-559981                                                                                                      │ offline-crio-559981         │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p force-systemd-flag-742570 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-742570   │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ pause   │ -p pause-613858 --alsologtostderr -v=5                                                                                      │ pause-613858                │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ start   │ -p running-upgrade-462303 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ running-upgrade-462303      │ jenkins │ v1.32.0 │ 25 Oct 25 09:09 UTC │                     │
	│ delete  │ -p NoKubernetes-629442                                                                                                      │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p NoKubernetes-629442 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-629442         │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:09:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:09:28.785110  197224 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:09:28.785441  197224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:28.785454  197224 out.go:374] Setting ErrFile to fd 2...
	I1025 09:09:28.785459  197224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:28.785782  197224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:09:28.786467  197224 out.go:368] Setting JSON to false
	I1025 09:09:28.787862  197224 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3117,"bootTime":1761380252,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:09:28.787984  197224 start.go:141] virtualization: kvm guest
	I1025 09:09:28.790622  197224 out.go:179] * [NoKubernetes-629442] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:09:28.792595  197224 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:09:28.792602  197224 notify.go:220] Checking for updates...
	I1025 09:09:28.794032  197224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:09:28.796045  197224 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:09:28.798505  197224 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:09:28.802175  197224 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:09:28.803814  197224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.254755611Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.255544697Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.255569227Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.255601604Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.25647044Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.256494945Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.260565004Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.260598298Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.261355099Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.261871614Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.261939846Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.267732179Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.312563228Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-5mvjc Namespace:kube-system ID:f0eb798730c40a61a4f07194aaf11290015229cafb58febf0fc75fd8cf13cd1e UID:ab423220-2a94-4f0c-9626-5f3151f00e87 NetNS:/var/run/netns/3826a89c-fc54-4382-9773-960373378bd3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000528008}] Aliases:map[]}"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.312796487Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-5mvjc for CNI network kindnet (type=ptp)"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313224772Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313257689Z" level=info msg="Starting seccomp notifier watcher"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313321324Z" level=info msg="Create NRI interface"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313431083Z" level=info msg="built-in NRI default validator is disabled"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313445745Z" level=info msg="runtime interface created"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313460165Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313475087Z" level=info msg="runtime interface starting up..."
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313482438Z" level=info msg="starting plugins..."
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313496886Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 25 09:09:17 pause-613858 crio[2125]: time="2025-10-25T09:09:17.313988652Z" level=info msg="No systemd watchdog enabled"
	Oct 25 09:09:17 pause-613858 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
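	
	Note: the single-line "Current CRI-O configuration" dump at 09:09:17.261355099Z above is hard to scan because the TOML is embedded with escaped newlines. Unescaped, the runtime-selection excerpt (content reproduced verbatim from that log line) reads:
	
		[crio.runtime]
		  default_runtime = "crun"
		  [crio.runtime.runtimes.crun]
		    runtime_path = "/usr/libexec/crio/crun"
		    runtime_root = "/run/crun"
		  [crio.runtime.runtimes.runc]
		    runtime_path = "/usr/libexec/crio/runc"
		    runtime_root = "/run/runc"
	
	The default_runtime = "crun" setting is relevant to the `runc list` failures reported later in this file.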
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0fa72110f5f6b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago      Running             coredns                   0                   f0eb798730c40       coredns-66bc5c9577-5mvjc               kube-system
	83593fd8cdbbf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   28 seconds ago      Running             kube-proxy                0                   ad62f6a61a17b       kube-proxy-4n9sk                       kube-system
	b0cfe0834184d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   28 seconds ago      Running             kindnet-cni               0                   1d6a6432cc1cc       kindnet-vcf92                          kube-system
	c7886d58cffc0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   39 seconds ago      Running             etcd                      0                   095ba79f2af32       etcd-pause-613858                      kube-system
	7bacff63c1379       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   39 seconds ago      Running             kube-controller-manager   0                   cbbc2d7d8552b       kube-controller-manager-pause-613858   kube-system
	e5cf81d0eb29a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   39 seconds ago      Running             kube-apiserver            0                   7251f06ff8ebc       kube-apiserver-pause-613858            kube-system
	8b7f648a973a1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   39 seconds ago      Running             kube-scheduler            0                   4c5017a8469c1       kube-scheduler-pause-613858            kube-system
	
	
	==> coredns [0fa72110f5f6b461764369463c4269ad3d2ccf4ec126b5d00e00d5176ba43d08] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60887 - 27879 "HINFO IN 7151933186248109070.5145578864095086868. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091127584s
	
	
	==> describe nodes <==
	Name:               pause-613858
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-613858
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=pause-613858
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_08_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:08:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-613858
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:09:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:09:11 +0000   Sat, 25 Oct 2025 09:09:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-613858
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f342c96d-bde0-473c-872a-568e575eb88b
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5mvjc                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-613858                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-vcf92                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-613858             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-613858    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-4n9sk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-613858             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node pause-613858 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node pause-613858 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node pause-613858 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node pause-613858 event: Registered Node pause-613858 in Controller
	  Normal  NodeReady                18s   kubelet          Node pause-613858 status is now: NodeReady
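	
	Note: this snapshot can be regenerated against a live profile with the same kubectl context the test harness uses elsewhere in this report:
	
		kubectl --context pause-613858 describe node pause-613858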
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [c7886d58cffc0a0ecd272461f1ea32ee36900dc767b2a938e56c9dd72bf6c45c] <==
	{"level":"warn","ts":"2025-10-25T09:08:51.234178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.244017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.251033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.258071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.265072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.273054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.280765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.288760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.297302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.308854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.316186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.323900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.331083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.339777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.347678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.355686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.363509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.369961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.376085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.384058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.399907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.404247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.412054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.419556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:08:51.473711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54414","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:29 up 51 min,  0 user,  load average: 6.01, 2.58, 1.58
	Linux pause-613858 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b0cfe0834184d1e241169faa4b377eb51017495feda9e43ce723b74b50175435] <==
	I1025 09:09:00.803216       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:09:00.803451       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:09:00.803618       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:09:00.803687       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:09:00.803712       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:09:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:09:01.103918       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:09:01.103960       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:09:01.103974       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:09:01.104120       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:09:01.504216       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:09:01.504239       1 metrics.go:72] Registering metrics
	I1025 09:09:01.504301       1 controller.go:711] "Syncing nftables rules"
	I1025 09:09:11.007256       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:09:11.007338       1 main.go:301] handling current node
	I1025 09:09:21.013736       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:09:21.013769       1 main.go:301] handling current node
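	
	Note: the "nri plugin exited: failed to connect to NRI service" entry at 09:09:01 is consistent with the NRI socket not existing yet: the CRI-O section above only logs "Create NRI interface" and "starting plugins..." at 09:09:17. A minimal manual check, assuming the pause-613858 profile is still up (standard minikube and coreutils commands, not part of the test suite; the socket path comes from nri_listen in the CRI-O configuration above):
	
		minikube ssh -p pause-613858 -- sudo ls -l /var/run/nri/nri.sock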
	
	
	==> kube-apiserver [e5cf81d0eb29ab58e21cabeccffa5d5469ee3b5aacfbd1d6280da96db059eb2a] <==
	I1025 09:08:52.267675       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:08:52.267714       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 09:08:52.294604       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:08:52.295342       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:08:52.299101       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:08:52.303460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:08:52.314733       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:08:52.324109       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:08:53.152127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:08:53.157291       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:08:53.157316       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:08:54.005585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:08:54.062772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:08:54.149520       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:08:54.158753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 09:08:54.160338       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:08:54.165892       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:08:54.231402       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:08:55.206170       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:08:55.221899       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:08:55.232108       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:08:59.337303       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:08:59.343130       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:09:00.083260       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:09:00.183523       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7bacff63c1379282854265e0ac2d1d15217ce38911df760f4ce89456b6c21b75] <==
	I1025 09:08:59.254615       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:08:59.258144       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:08:59.261506       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-613858" podCIDRs=["10.244.0.0/24"]
	I1025 09:08:59.264507       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:08:59.267847       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:08:59.279927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:08:59.279950       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:08:59.279958       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:08:59.280067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:08:59.280334       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:08:59.280601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:08:59.280621       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:08:59.280655       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:08:59.280819       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:08:59.280947       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:08:59.282039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:08:59.282070       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:08:59.283328       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:08:59.283350       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:08:59.283330       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:08:59.285710       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:08:59.285857       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:08:59.285941       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-613858"
	I1025 09:08:59.285997       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:09:14.288124       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [83593fd8cdbbfd2476b535cc8cf1fd2c51d1c5678c0f973463f66b6f3d3bc667] <==
	I1025 09:09:00.615550       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:09:00.678847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:09:00.779338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:09:00.779368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:09:00.779472       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:09:00.800494       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:09:00.800547       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:09:00.806975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:09:00.807420       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:09:00.807508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:09:00.809333       1 config.go:200] "Starting service config controller"
	I1025 09:09:00.809354       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:09:00.809378       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:09:00.809383       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:09:00.809421       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:09:00.809436       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:09:00.809668       1 config.go:309] "Starting node config controller"
	I1025 09:09:00.809678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:09:00.809686       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:09:00.909540       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:09:00.909680       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:09:00.909711       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8b7f648a973a1ff61e663028b204c0704fee7991a50321618b6fc19b83936f4d] <==
	E1025 09:08:52.320539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:08:52.320637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:08:52.321100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:08:52.321357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:08:52.322456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:08:52.323467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:08:52.331264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:08:52.332199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:08:52.335624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:08:53.142911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:08:53.166276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:08:53.192290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:08:53.201137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:08:53.245368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:08:53.375157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:08:53.376869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:08:53.376879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:08:53.441003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:08:53.485595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:08:53.488848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:08:53.556835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:08:53.633828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:08:53.636809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:08:53.707161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1025 09:08:55.304629       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:08:56 pause-613858 kubelet[1286]: I1025 09:08:56.212949    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-613858" podStartSLOduration=1.212941123 podStartE2EDuration="1.212941123s" podCreationTimestamp="2025-10-25 09:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:08:56.200794018 +0000 UTC m=+1.210551909" watchObservedRunningTime="2025-10-25 09:08:56.212941123 +0000 UTC m=+1.222699053"
	Oct 25 09:08:59 pause-613858 kubelet[1286]: I1025 09:08:59.272048    1286 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:08:59 pause-613858 kubelet[1286]: I1025 09:08:59.272771    1286 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245538    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f26c494-986a-4ddb-96eb-342dac616a0c-lib-modules\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245586    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f70706f6-0293-43f0-b6ab-68ee5a45051b-kube-proxy\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245609    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f70706f6-0293-43f0-b6ab-68ee5a45051b-lib-modules\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245635    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f26c494-986a-4ddb-96eb-342dac616a0c-xtables-lock\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245677    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f70706f6-0293-43f0-b6ab-68ee5a45051b-xtables-lock\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245700    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88ns\" (UniqueName: \"kubernetes.io/projected/9f26c494-986a-4ddb-96eb-342dac616a0c-kube-api-access-q88ns\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245723    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9zf\" (UniqueName: \"kubernetes.io/projected/f70706f6-0293-43f0-b6ab-68ee5a45051b-kube-api-access-zq9zf\") pod \"kube-proxy-4n9sk\" (UID: \"f70706f6-0293-43f0-b6ab-68ee5a45051b\") " pod="kube-system/kube-proxy-4n9sk"
	Oct 25 09:09:00 pause-613858 kubelet[1286]: I1025 09:09:00.245750    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9f26c494-986a-4ddb-96eb-342dac616a0c-cni-cfg\") pod \"kindnet-vcf92\" (UID: \"9f26c494-986a-4ddb-96eb-342dac616a0c\") " pod="kube-system/kindnet-vcf92"
	Oct 25 09:09:01 pause-613858 kubelet[1286]: I1025 09:09:01.184003    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vcf92" podStartSLOduration=1.183979767 podStartE2EDuration="1.183979767s" podCreationTimestamp="2025-10-25 09:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:09:01.183855661 +0000 UTC m=+6.193613557" watchObservedRunningTime="2025-10-25 09:09:01.183979767 +0000 UTC m=+6.193737662"
	Oct 25 09:09:01 pause-613858 kubelet[1286]: I1025 09:09:01.192485    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4n9sk" podStartSLOduration=1.192467137 podStartE2EDuration="1.192467137s" podCreationTimestamp="2025-10-25 09:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:09:01.192314993 +0000 UTC m=+6.202072889" watchObservedRunningTime="2025-10-25 09:09:01.192467137 +0000 UTC m=+6.202225032"
	Oct 25 09:09:11 pause-613858 kubelet[1286]: I1025 09:09:11.121607    1286 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:09:11 pause-613858 kubelet[1286]: I1025 09:09:11.226605    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljvc\" (UniqueName: \"kubernetes.io/projected/ab423220-2a94-4f0c-9626-5f3151f00e87-kube-api-access-jljvc\") pod \"coredns-66bc5c9577-5mvjc\" (UID: \"ab423220-2a94-4f0c-9626-5f3151f00e87\") " pod="kube-system/coredns-66bc5c9577-5mvjc"
	Oct 25 09:09:11 pause-613858 kubelet[1286]: I1025 09:09:11.226667    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab423220-2a94-4f0c-9626-5f3151f00e87-config-volume\") pod \"coredns-66bc5c9577-5mvjc\" (UID: \"ab423220-2a94-4f0c-9626-5f3151f00e87\") " pod="kube-system/coredns-66bc5c9577-5mvjc"
	Oct 25 09:09:12 pause-613858 kubelet[1286]: I1025 09:09:12.208740    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5mvjc" podStartSLOduration=12.208717674 podStartE2EDuration="12.208717674s" podCreationTimestamp="2025-10-25 09:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:09:12.20840202 +0000 UTC m=+17.218159915" watchObservedRunningTime="2025-10-25 09:09:12.208717674 +0000 UTC m=+17.218475569"
	Oct 25 09:09:17 pause-613858 kubelet[1286]: W1025 09:09:17.208016    1286 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 25 09:09:17 pause-613858 kubelet[1286]: E1025 09:09:17.208094    1286 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 25 09:09:17 pause-613858 kubelet[1286]: E1025 09:09:17.208138    1286 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 09:09:17 pause-613858 kubelet[1286]: E1025 09:09:17.208154    1286 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 09:09:20 pause-613858 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:09:20 pause-613858 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:09:20 pause-613858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:09:20 pause-613858 systemd[1]: kubelet.service: Consumed 1.157s CPU time.
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-613858 -n pause-613858
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-613858 -n pause-613858: exit status 2 (388.740301ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-613858 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.97s)

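Note: the post-mortem above records the sequence behind this failure: CRI-O was restarted at 09:09:17, kubelet was stopped at 09:09:20 (plausibly by the pause operation itself, which stops the kubelet), yet `minikube status` exited with status 2 while still reporting the API server as "Running". A minimal manual probe of the same state, assuming the pause-613858 profile is still running (standard minikube and crictl commands, not the test's own code):

	# does the CRI socket answer, and what does minikube report for the apiserver?
	minikube ssh -p pause-613858 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	minikube status -p pause-613858 --format='{{.APIServer}}'
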
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.102071ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:11:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-959110 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-959110 describe deploy/metrics-server -n kube-system: exit status 1 (58.51478ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-959110 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
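Note: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused check shelling out to `sudo runc list -f json`. Assuming this profile's CRI-O is configured like the pause-613858 node shown earlier in this report (default_runtime = "crun", state under /run/crun), /run/runc is never created, so the runc invocation fails with "open /run/runc: no such file or directory". A sketch of checking both sides by hand (crun's list subcommand mirrors runc's; the failing check itself only invokes runc):

	minikube ssh -p old-k8s-version-959110 -- sudo runc list -f json   # fails: open /run/runc: no such file or directory
	minikube ssh -p old-k8s-version-959110 -- sudo crun list           # reads container state from /run/crun instead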
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-959110
helpers_test.go:243: (dbg) docker inspect old-k8s-version-959110:

-- stdout --
	[
	    {
	        "Id": "e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e",
	        "Created": "2025-10-25T09:10:32.791597968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222887,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:10:32.848284943Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/hostname",
	        "HostsPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/hosts",
	        "LogPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e-json.log",
	        "Name": "/old-k8s-version-959110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-959110:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-959110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e",
	                "LowerDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-959110",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-959110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-959110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-959110",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-959110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1756460bc537d1df930e57c71898fe042282a2ea4ac87508a982c8df44b54477",
	            "SandboxKey": "/var/run/docker/netns/1756460bc537",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-959110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:f8:33:d5:de:72",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58b5fad6c4ae7f65feaa543d9f157207a68afa3f5da4e8c5604314ac776b104d",
	                    "EndpointID": "3dba8a6037fd972d3d512c38cffdddf7ac977e896544c09f7d5a6a1dd71df4b8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-959110",
	                        "e80032bb8f45"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
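The inspect dump above is exhaustive; when only a field or two matters, docker's --format Go templates extract them directly (the suite uses the same mechanism with --format={{.State.Status}} further down). Two illustrative one-liners for the fields most relevant to this post-mortem, assuming the container is still up:

	docker container inspect old-k8s-version-959110 --format '{{.State.Status}} pid={{.State.Pid}}'
	docker container inspect old-k8s-version-959110 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'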
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-959110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-959110 logs -n 25: (1.108114439s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-687131 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-687131             │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p cilium-687131 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-687131             │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p cilium-687131 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-687131             │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ ssh     │ -p cilium-687131 sudo crio config                                                                                                                                                                                                             │ cilium-687131             │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ start   │ -p running-upgrade-462303 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-462303    │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p cilium-687131                                                                                                                                                                                                                              │ cilium-687131             │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p force-systemd-env-423026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-423026  │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ force-systemd-flag-742570 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-742570 │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ delete  │ -p force-systemd-flag-742570                                                                                                                                                                                                                  │ force-systemd-flag-742570 │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ ssh     │ -p NoKubernetes-629442 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-629442       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ delete  │ -p NoKubernetes-629442                                                                                                                                                                                                                        │ NoKubernetes-629442       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-851718    │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p cert-options-077936 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-423026                                                                                                                                                                                                                   │ force-systemd-env-423026  │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p running-upgrade-462303                                                                                                                                                                                                                     │ running-upgrade-462303    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-047620    │ jenkins │ v1.32.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ cert-options-077936 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ -p cert-options-077936 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p cert-options-077936                                                                                                                                                                                                                        │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-047620    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:10:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:10:52.892379  227228 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:10:52.892657  227228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:10:52.892669  227228 out.go:374] Setting ErrFile to fd 2...
	I1025 09:10:52.892676  227228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:10:52.892929  227228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:10:52.893509  227228 out.go:368] Setting JSON to false
	I1025 09:10:52.895011  227228 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3201,"bootTime":1761380252,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:10:52.895118  227228 start.go:141] virtualization: kvm guest
	I1025 09:10:52.899885  227228 out.go:179] * [missing-upgrade-047620] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:10:52.901229  227228 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:10:52.901224  227228 notify.go:220] Checking for updates...
	I1025 09:10:52.904213  227228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:10:52.905456  227228 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:10:52.906657  227228 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:10:52.907838  227228 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:10:52.909196  227228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:10:52.910813  227228 config.go:182] Loaded profile config "missing-upgrade-047620": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 09:10:52.912702  227228 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 09:10:52.913891  227228 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:10:52.941415  227228 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:10:52.941513  227228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:10:53.016034  227228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 09:10:53.004226109 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:10:53.016170  227228 docker.go:318] overlay module found
	I1025 09:10:53.018151  227228 out.go:179] * Using the docker driver based on existing profile
	I1025 09:10:53.019711  227228 start.go:305] selected driver: docker
	I1025 09:10:53.019729  227228 start.go:925] validating driver "docker" against &{Name:missing-upgrade-047620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-047620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:10:53.019811  227228 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:10:53.020384  227228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:10:53.081115  227228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 09:10:53.070540141 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:10:53.081426  227228 cni.go:84] Creating CNI manager for ""
	I1025 09:10:53.081493  227228 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:10:53.081550  227228 start.go:349] cluster config:
	{Name:missing-upgrade-047620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-047620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:10:53.084203  227228 out.go:179] * Starting "missing-upgrade-047620" primary control-plane node in "missing-upgrade-047620" cluster
	I1025 09:10:53.085278  227228 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:10:53.086507  227228 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:10:53.087536  227228 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 09:10:53.087586  227228 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1025 09:10:53.087597  227228 cache.go:58] Caching tarball of preloaded images
	I1025 09:10:53.087683  227228 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1025 09:10:53.087721  227228 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:10:53.087735  227228 cache.go:61] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1025 09:10:53.087855  227228 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/config.json ...
	I1025 09:10:53.110007  227228 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1025 09:10:53.110026  227228 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1025 09:10:53.110045  227228 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:10:53.110080  227228 start.go:360] acquireMachinesLock for missing-upgrade-047620: {Name:mk9218a61680cb858a813eef224d70214635966b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:10:53.110145  227228 start.go:364] duration metric: took 42.667µs to acquireMachinesLock for "missing-upgrade-047620"
	I1025 09:10:53.110167  227228 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:10:53.110176  227228 fix.go:54] fixHost starting: 
	I1025 09:10:53.110453  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:53.130528  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:10:53.130630  227228 fix.go:112] recreateIfNeeded on missing-upgrade-047620: state= err=unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:53.130792  227228 fix.go:117] machineExists: false. err=machine does not exist
	I1025 09:10:53.132456  227228 out.go:179] * docker "missing-upgrade-047620" container is missing, will recreate.
	I1025 09:10:51.673501  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:52.173793  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:52.673768  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:53.174001  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:53.674319  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:54.173575  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:54.674435  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:55.174594  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:55.673742  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:56.173764  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:52.867711  225660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:10:52.894568  225660 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496 for IP: 192.168.85.2
	I1025 09:10:52.894587  225660 certs.go:195] generating shared ca certs ...
	I1025 09:10:52.894606  225660 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:10:52.894773  225660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:10:52.894837  225660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:10:52.894851  225660 certs.go:257] generating profile certs ...
	I1025 09:10:52.894970  225660 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/client.key
	I1025 09:10:52.895049  225660 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/apiserver.key.1aa769e4
	I1025 09:10:52.895105  225660 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/proxy-client.key
	I1025 09:10:52.895252  225660 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:10:52.895293  225660 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:10:52.895307  225660 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:10:52.895341  225660 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:10:52.895377  225660 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:10:52.895400  225660 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:10:52.895442  225660 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:10:52.896075  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:10:52.917389  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:10:52.939719  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:10:52.960866  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:10:52.987130  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1025 09:10:53.013555  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:10:53.034162  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:10:53.057148  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:10:53.078936  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:10:53.098262  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:10:53.118729  225660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:10:53.138530  225660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:10:53.152533  225660 ssh_runner.go:195] Run: openssl version
	I1025 09:10:53.159062  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:10:53.168106  225660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:10:53.172374  225660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:10:53.172431  225660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:10:53.214563  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:10:53.225445  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:10:53.236253  225660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:10:53.240872  225660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:10:53.240940  225660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:10:53.276189  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:10:53.285159  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:10:53.294461  225660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:10:53.298553  225660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:10:53.298651  225660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:10:53.334978  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:10:53.343705  225660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:10:53.347558  225660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:10:53.382522  225660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:10:53.416992  225660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:10:53.458180  225660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:10:53.494508  225660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:10:53.530824  225660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 09:10:53.567068  225660 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-497496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-497496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:10:53.567136  225660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:10:53.567197  225660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:10:53.598897  225660 cri.go:89] found id: ""
	I1025 09:10:53.598991  225660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:10:53.608038  225660 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:10:53.608058  225660 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:10:53.608102  225660 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:10:53.616088  225660 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:10:53.616784  225660 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-497496" does not appear in /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:10:53.617178  225660 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-5966/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-497496" cluster setting kubeconfig missing "kubernetes-upgrade-497496" context setting]
	I1025 09:10:53.617788  225660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:10:53.658175  225660 kapi.go:59] client config for kubernetes-upgrade-497496: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/client.key", CAFile:"/home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:10:53.658615  225660 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:10:53.658631  225660 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:10:53.658636  225660 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:10:53.658655  225660 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:10:53.658659  225660 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:10:53.658998  225660 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:10:53.667856  225660 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-25 09:10:28.867455660 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-25 09:10:52.764962164 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-497496"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.34.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1025 09:10:53.667872  225660 kubeadm.go:1160] stopping kube-system containers ...
	I1025 09:10:53.667883  225660 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 09:10:53.667929  225660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:10:53.698716  225660 cri.go:89] found id: ""
	I1025 09:10:53.698783  225660 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 09:10:53.740438  225660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:10:53.749926  225660 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct 25 09:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 25 09:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Oct 25 09:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 25 09:10 /etc/kubernetes/scheduler.conf
	
	I1025 09:10:53.750000  225660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:10:53.758924  225660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:10:53.767122  225660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:10:53.775849  225660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:10:53.775921  225660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:10:53.784273  225660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:10:53.794358  225660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:10:53.794424  225660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
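Each grep above exits non-zero when the control-plane endpoint string is absent, and minikube then deletes that kubeconfig so the later `kubeadm init phase kubeconfig all` regenerates it (admin.conf and kubelet.conf passed the check; controller-manager.conf and scheduler.conf did not). A hedged sketch of that check-and-remove loop (helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// ensureEndpoint removes a kubeconfig that does not mention the expected
// control-plane endpoint, so kubeadm can recreate it from scratch.
func ensureEndpoint(path string) error {
	// grep exits 1 when the pattern is absent, which surfaces here as an error.
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		fmt.Printf("%q may not be in %s - removing\n", endpoint, path)
		return exec.Command("sudo", "rm", "-f", path).Run()
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(f); err != nil {
			fmt.Println("error:", err)
		}
	}
}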
	I1025 09:10:53.803124  225660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:10:53.863107  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:10:53.908338  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:10:54.635424  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:10:54.824413  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:10:54.875841  225660 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:10:54.927856  225660 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:10:54.927933  225660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:10:55.428582  225660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:10:55.443778  225660 api_server.go:72] duration metric: took 515.927927ms to wait for apiserver process to appear ...
	I1025 09:10:55.443807  225660 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:10:55.443831  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:10:55.444216  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:10:55.944773  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
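The healthz wait that begins here re-checks the endpoint on a roughly 500ms cadence until the apiserver answers 200, tolerating "connection refused" while the control plane restarts. A minimal standalone poller in the same spirit (a sketch, not minikube's api_server.go; the real checker pins the cluster CA rather than skipping TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz every 500ms until it returns 200 OK
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The host does not trust the apiserver's self-signed certificate,
		// so verification is skipped in this illustrative version only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}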
	I1025 09:10:53.133745  227228 delete.go:124] DEMOLISHING missing-upgrade-047620 ...
	I1025 09:10:53.133819  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:53.151933  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	W1025 09:10:53.151997  227228 stop.go:83] unable to get state: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:53.152035  227228 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:53.152370  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:53.169932  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:10:53.170000  227228 delete.go:82] Unable to get host status for missing-upgrade-047620, assuming it has already been deleted: state: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:53.170064  227228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-047620
	W1025 09:10:53.189544  227228 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-047620 returned with exit code 1
	I1025 09:10:53.189576  227228 kic.go:371] could not find the container missing-upgrade-047620 to remove it. will try anyways
	I1025 09:10:53.189618  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:53.208857  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	W1025 09:10:53.208927  227228 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:53.208981  227228 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-047620 /bin/bash -c "sudo init 0"
	W1025 09:10:53.228117  227228 cli_runner.go:211] docker exec --privileged -t missing-upgrade-047620 /bin/bash -c "sudo init 0" returned with exit code 1
	I1025 09:10:53.228154  227228 oci.go:659] error shutdown missing-upgrade-047620: docker exec --privileged -t missing-upgrade-047620 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:54.228868  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:54.249415  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:10:54.249476  227228 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:54.249510  227228 oci.go:673] temporary error: container missing-upgrade-047620 status is  but expect it to be exited
	I1025 09:10:54.249557  227228 retry.go:31] will retry after 255.402866ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:54.505852  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:54.524064  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:10:54.524124  227228 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:54.524139  227228 oci.go:673] temporary error: container missing-upgrade-047620 status is  but expect it to be exited
	I1025 09:10:54.524170  227228 retry.go:31] will retry after 749.19721ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:55.274015  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:55.291161  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:10:55.291239  227228 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:55.291260  227228 oci.go:673] temporary error: container missing-upgrade-047620 status is  but expect it to be exited
	I1025 09:10:55.291303  227228 retry.go:31] will retry after 902.842628ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:56.194376  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:56.215248  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:10:56.215320  227228 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:56.215336  227228 oci.go:673] temporary error: container missing-upgrade-047620 status is  but expect it to be exited
	I1025 09:10:56.215366  227228 retry.go:31] will retry after 2.37606056s: couldn't verify container is exited. %v: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
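The shutdown verification above is a retry loop with growing, jittered delays (255ms, 749ms, 902ms, 2.37s, ...); when attempts run out, the caller treats the failure as non-fatal ("might be okay") and falls through to a forced `docker rm -f -v`. A generic sketch of that backoff pattern (not the retry.go implementation itself):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// sleeping a growing, jittered interval between tries.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter is what makes the printed intervals irregular rather than
		// clean doublings in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return fmt.Errorf("container state still unknown")
		}
		return nil
	})
	fmt.Println("result:", err)
}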
	I1025 09:10:56.674617  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:57.173714  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:57.674473  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:58.173668  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:58.673891  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:59.173599  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:10:59.673671  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:11:00.173793  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:11:00.674273  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:11:01.173656  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:11:01.674030  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:11:02.174400  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:11:02.674281  221171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:11:02.744508  221171 kubeadm.go:1113] duration metric: took 11.646244126s to wait for elevateKubeSystemPrivileges
	I1025 09:11:02.744543  221171 kubeadm.go:402] duration metric: took 21.627584243s to StartCluster
	I1025 09:11:02.744561  221171 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:02.744630  221171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:11:02.745931  221171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:02.746154  221171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:11:02.746154  221171 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:11:02.746195  221171 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:11:02.746312  221171 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-959110"
	I1025 09:11:02.746340  221171 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-959110"
	I1025 09:11:02.746292  221171 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-959110"
	I1025 09:11:02.746367  221171 config.go:182] Loaded profile config "old-k8s-version-959110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:11:02.746415  221171 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-959110"
	I1025 09:11:02.746481  221171 host.go:66] Checking if "old-k8s-version-959110" exists ...
	I1025 09:11:02.746726  221171 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:02.746982  221171 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:02.748135  221171 out.go:179] * Verifying Kubernetes components...
	I1025 09:11:02.750504  221171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:11:02.771653  221171 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:11:00.946736  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 09:11:00.946785  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:10:58.591743  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:10:58.609770  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:10:58.609829  227228 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:10:58.609838  227228 oci.go:673] temporary error: container missing-upgrade-047620 status is  but expect it to be exited
	I1025 09:10:58.609866  227228 retry.go:31] will retry after 3.577433322s: couldn't verify container is exited. %v: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:11:02.187759  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:11:02.213681  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:11:02.213750  227228 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:11:02.213764  227228 oci.go:673] temporary error: container missing-upgrade-047620 status is  but expect it to be exited
	I1025 09:11:02.213808  227228 retry.go:31] will retry after 5.51273392s: couldn't verify container is exited. %v: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:11:02.772161  221171 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-959110"
	I1025 09:11:02.772205  221171 host.go:66] Checking if "old-k8s-version-959110" exists ...
	I1025 09:11:02.772698  221171 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:02.773099  221171 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:11:02.773118  221171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:11:02.773171  221171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:02.799092  221171 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:11:02.799122  221171 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:11:02.799188  221171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:02.806629  221171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:02.825676  221171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:02.853283  221171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:11:02.907784  221171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:11:02.928701  221171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:11:02.944525  221171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:11:03.090040  221171 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1025 09:11:03.091375  221171 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-959110" to be "Ready" ...
	I1025 09:11:03.312009  221171 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:11:03.313337  221171 addons.go:514] duration metric: took 567.142085ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:11:03.595364  221171 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-959110" context rescaled to 1 replicas
	W1025 09:11:05.095190  221171 node_ready.go:57] node "old-k8s-version-959110" has "Ready":"False" status (will retry)
	I1025 09:11:05.949082  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 09:11:05.949124  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:07.729068  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:11:07.747742  227228 cli_runner.go:211] docker container inspect missing-upgrade-047620 --format={{.State.Status}} returned with exit code 1
	I1025 09:11:07.747796  227228 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	I1025 09:11:07.747803  227228 oci.go:673] temporary error: container missing-upgrade-047620 status is  but expect it to be exited
	I1025 09:11:07.747836  227228 oci.go:88] couldn't shut down missing-upgrade-047620 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-047620": docker container inspect missing-upgrade-047620 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-047620
	 
	I1025 09:11:07.747893  227228 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-047620
	I1025 09:11:07.765018  227228 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-047620
	W1025 09:11:07.782343  227228 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-047620 returned with exit code 1
	I1025 09:11:07.782424  227228 cli_runner.go:164] Run: docker network inspect missing-upgrade-047620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:11:07.800103  227228 cli_runner.go:164] Run: docker network rm missing-upgrade-047620
	I1025 09:11:07.989252  227228 fix.go:124] Sleeping 1 second for extra luck!
	I1025 09:11:08.989350  227228 start.go:125] createHost starting for "" (driver="docker")
	W1025 09:11:07.594179  221171 node_ready.go:57] node "old-k8s-version-959110" has "Ready":"False" status (will retry)
	W1025 09:11:09.595116  221171 node_ready.go:57] node "old-k8s-version-959110" has "Ready":"False" status (will retry)
	W1025 09:11:11.595735  221171 node_ready.go:57] node "old-k8s-version-959110" has "Ready":"False" status (will retry)
	I1025 09:11:10.950114  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 09:11:10.950156  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:08.991522  227228 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:11:08.991675  227228 start.go:159] libmachine.API.Create for "missing-upgrade-047620" (driver="docker")
	I1025 09:11:08.991709  227228 client.go:168] LocalClient.Create starting
	I1025 09:11:08.991806  227228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:11:08.991840  227228 main.go:141] libmachine: Decoding PEM data...
	I1025 09:11:08.991855  227228 main.go:141] libmachine: Parsing certificate...
	I1025 09:11:08.991908  227228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:11:08.991928  227228 main.go:141] libmachine: Decoding PEM data...
	I1025 09:11:08.991935  227228 main.go:141] libmachine: Parsing certificate...
	I1025 09:11:08.992146  227228 cli_runner.go:164] Run: docker network inspect missing-upgrade-047620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:11:09.009522  227228 cli_runner.go:211] docker network inspect missing-upgrade-047620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:11:09.009584  227228 network_create.go:284] running [docker network inspect missing-upgrade-047620] to gather additional debugging logs...
	I1025 09:11:09.009606  227228 cli_runner.go:164] Run: docker network inspect missing-upgrade-047620
	W1025 09:11:09.027235  227228 cli_runner.go:211] docker network inspect missing-upgrade-047620 returned with exit code 1
	I1025 09:11:09.027266  227228 network_create.go:287] error running [docker network inspect missing-upgrade-047620]: docker network inspect missing-upgrade-047620: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-047620 not found
	I1025 09:11:09.027283  227228 network_create.go:289] output of [docker network inspect missing-upgrade-047620]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-047620 not found
	
	** /stderr **
	I1025 09:11:09.027376  227228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:11:09.045694  227228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:11:09.046395  227228 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:11:09.047047  227228 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:11:09.047315  227228 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e91ae20fd62c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:4f:77:0c:60:76} reservation:<nil>}
	I1025 09:11:09.047768  227228 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9aa42478a513 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:4e:f8:f5:5b:2e} reservation:<nil>}
	I1025 09:11:09.048251  227228 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-58b5fad6c4ae IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:d6:e5:9b:77:a1:68} reservation:<nil>}
	I1025 09:11:09.049018  227228 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00200c780}
	I1025 09:11:09.049042  227228 network_create.go:124] attempt to create docker network missing-upgrade-047620 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:11:09.049091  227228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-047620 missing-upgrade-047620
	I1025 09:11:09.108676  227228 network_create.go:108] docker network missing-upgrade-047620 192.168.103.0/24 created
	I1025 09:11:09.108702  227228 kic.go:121] calculated static IP "192.168.103.2" for the "missing-upgrade-047620" container
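The subnet scan above walks candidate 192.168.x.0/24 networks, advancing the third octet in steps of 9 (49, 58, 67, 76, 85, 94, 103), and takes the first one with no existing bridge. A small sketch of that probe, with the taken set hard-coded from the log (an illustration of the selection order, not the network.go code):

package main

import "fmt"

// firstFreeSubnet mimics the probe above: candidate /24s start at
// 192.168.49.0 and step the third octet by 9 until an unused one appears.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	// The six bridges already present on this host, per the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24
}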
	I1025 09:11:09.108766  227228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:11:09.126664  227228 cli_runner.go:164] Run: docker volume create missing-upgrade-047620 --label name.minikube.sigs.k8s.io=missing-upgrade-047620 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:11:09.143812  227228 oci.go:103] Successfully created a docker volume missing-upgrade-047620
	I1025 09:11:09.143906  227228 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-047620-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-047620 --entrypoint /usr/bin/test -v missing-upgrade-047620:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1025 09:11:09.456365  227228 oci.go:107] Successfully prepared a docker volume missing-upgrade-047620
	I1025 09:11:09.456410  227228 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 09:11:09.456435  227228 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:11:09.456517  227228 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-047620:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 09:11:13.670503  221171 node_ready.go:57] node "old-k8s-version-959110" has "Ready":"False" status (will retry)
	I1025 09:11:15.095187  221171 node_ready.go:49] node "old-k8s-version-959110" is "Ready"
	I1025 09:11:15.095219  221171 node_ready.go:38] duration metric: took 12.003796495s for node "old-k8s-version-959110" to be "Ready" ...
	I1025 09:11:15.095236  221171 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:11:15.095287  221171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:11:15.109552  221171 api_server.go:72] duration metric: took 12.363280884s to wait for apiserver process to appear ...
	I1025 09:11:15.109582  221171 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:11:15.109604  221171 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 09:11:15.113894  221171 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 09:11:15.115547  221171 api_server.go:141] control plane version: v1.28.0
	I1025 09:11:15.115577  221171 api_server.go:131] duration metric: took 5.987484ms to wait for apiserver health ...
	I1025 09:11:15.115589  221171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:11:15.120813  221171 system_pods.go:59] 8 kube-system pods found
	I1025 09:11:15.120869  221171 system_pods.go:61] "coredns-5dd5756b68-wm9rk" [865c21db-7403-433a-b306-c34726b80124] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:11:15.120880  221171 system_pods.go:61] "etcd-old-k8s-version-959110" [be4c6227-9c8c-4f98-8c9e-739c4c922ee8] Running
	I1025 09:11:15.120887  221171 system_pods.go:61] "kindnet-gq9q4" [7ea77cbc-ce8d-488d-8ced-0328e783cba0] Running
	I1025 09:11:15.120893  221171 system_pods.go:61] "kube-apiserver-old-k8s-version-959110" [fcba789f-8536-4ef7-8516-ddcd2ea91609] Running
	I1025 09:11:15.120899  221171 system_pods.go:61] "kube-controller-manager-old-k8s-version-959110" [d4bd9320-ac8b-4669-ae0e-b1d742f172a0] Running
	I1025 09:11:15.120909  221171 system_pods.go:61] "kube-proxy-zrfv4" [5deb9893-69f7-459d-87c3-30ecc26ca937] Running
	I1025 09:11:15.120914  221171 system_pods.go:61] "kube-scheduler-old-k8s-version-959110" [f53af926-4da1-40e2-ac93-a045432d16b4] Running
	I1025 09:11:15.120935  221171 system_pods.go:61] "storage-provisioner" [e3046c99-91ff-4a4f-9bf2-cb82470c9b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:11:15.120947  221171 system_pods.go:74] duration metric: took 5.35092ms to wait for pod list to return data ...
	I1025 09:11:15.120959  221171 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:11:15.123679  221171 default_sa.go:45] found service account: "default"
	I1025 09:11:15.123702  221171 default_sa.go:55] duration metric: took 2.737458ms for default service account to be created ...
	I1025 09:11:15.123713  221171 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:11:15.127312  221171 system_pods.go:86] 8 kube-system pods found
	I1025 09:11:15.127340  221171 system_pods.go:89] "coredns-5dd5756b68-wm9rk" [865c21db-7403-433a-b306-c34726b80124] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:11:15.127345  221171 system_pods.go:89] "etcd-old-k8s-version-959110" [be4c6227-9c8c-4f98-8c9e-739c4c922ee8] Running
	I1025 09:11:15.127352  221171 system_pods.go:89] "kindnet-gq9q4" [7ea77cbc-ce8d-488d-8ced-0328e783cba0] Running
	I1025 09:11:15.127356  221171 system_pods.go:89] "kube-apiserver-old-k8s-version-959110" [fcba789f-8536-4ef7-8516-ddcd2ea91609] Running
	I1025 09:11:15.127360  221171 system_pods.go:89] "kube-controller-manager-old-k8s-version-959110" [d4bd9320-ac8b-4669-ae0e-b1d742f172a0] Running
	I1025 09:11:15.127363  221171 system_pods.go:89] "kube-proxy-zrfv4" [5deb9893-69f7-459d-87c3-30ecc26ca937] Running
	I1025 09:11:15.127368  221171 system_pods.go:89] "kube-scheduler-old-k8s-version-959110" [f53af926-4da1-40e2-ac93-a045432d16b4] Running
	I1025 09:11:15.127378  221171 system_pods.go:89] "storage-provisioner" [e3046c99-91ff-4a4f-9bf2-cb82470c9b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:11:15.127408  221171 retry.go:31] will retry after 250.038669ms: missing components: kube-dns
	I1025 09:11:15.381403  221171 system_pods.go:86] 8 kube-system pods found
	I1025 09:11:15.381437  221171 system_pods.go:89] "coredns-5dd5756b68-wm9rk" [865c21db-7403-433a-b306-c34726b80124] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:11:15.381444  221171 system_pods.go:89] "etcd-old-k8s-version-959110" [be4c6227-9c8c-4f98-8c9e-739c4c922ee8] Running
	I1025 09:11:15.381452  221171 system_pods.go:89] "kindnet-gq9q4" [7ea77cbc-ce8d-488d-8ced-0328e783cba0] Running
	I1025 09:11:15.381458  221171 system_pods.go:89] "kube-apiserver-old-k8s-version-959110" [fcba789f-8536-4ef7-8516-ddcd2ea91609] Running
	I1025 09:11:15.381463  221171 system_pods.go:89] "kube-controller-manager-old-k8s-version-959110" [d4bd9320-ac8b-4669-ae0e-b1d742f172a0] Running
	I1025 09:11:15.381470  221171 system_pods.go:89] "kube-proxy-zrfv4" [5deb9893-69f7-459d-87c3-30ecc26ca937] Running
	I1025 09:11:15.381476  221171 system_pods.go:89] "kube-scheduler-old-k8s-version-959110" [f53af926-4da1-40e2-ac93-a045432d16b4] Running
	I1025 09:11:15.381484  221171 system_pods.go:89] "storage-provisioner" [e3046c99-91ff-4a4f-9bf2-cb82470c9b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:11:15.381503  221171 retry.go:31] will retry after 305.258816ms: missing components: kube-dns
	I1025 09:11:15.691525  221171 system_pods.go:86] 8 kube-system pods found
	I1025 09:11:15.691565  221171 system_pods.go:89] "coredns-5dd5756b68-wm9rk" [865c21db-7403-433a-b306-c34726b80124] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:11:15.691573  221171 system_pods.go:89] "etcd-old-k8s-version-959110" [be4c6227-9c8c-4f98-8c9e-739c4c922ee8] Running
	I1025 09:11:15.691580  221171 system_pods.go:89] "kindnet-gq9q4" [7ea77cbc-ce8d-488d-8ced-0328e783cba0] Running
	I1025 09:11:15.691586  221171 system_pods.go:89] "kube-apiserver-old-k8s-version-959110" [fcba789f-8536-4ef7-8516-ddcd2ea91609] Running
	I1025 09:11:15.691592  221171 system_pods.go:89] "kube-controller-manager-old-k8s-version-959110" [d4bd9320-ac8b-4669-ae0e-b1d742f172a0] Running
	I1025 09:11:15.691597  221171 system_pods.go:89] "kube-proxy-zrfv4" [5deb9893-69f7-459d-87c3-30ecc26ca937] Running
	I1025 09:11:15.691603  221171 system_pods.go:89] "kube-scheduler-old-k8s-version-959110" [f53af926-4da1-40e2-ac93-a045432d16b4] Running
	I1025 09:11:15.691610  221171 system_pods.go:89] "storage-provisioner" [e3046c99-91ff-4a4f-9bf2-cb82470c9b75] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:11:15.691628  221171 retry.go:31] will retry after 480.384992ms: missing components: kube-dns
	I1025 09:11:16.175922  221171 system_pods.go:86] 8 kube-system pods found
	I1025 09:11:16.175950  221171 system_pods.go:89] "coredns-5dd5756b68-wm9rk" [865c21db-7403-433a-b306-c34726b80124] Running
	I1025 09:11:16.175956  221171 system_pods.go:89] "etcd-old-k8s-version-959110" [be4c6227-9c8c-4f98-8c9e-739c4c922ee8] Running
	I1025 09:11:16.175960  221171 system_pods.go:89] "kindnet-gq9q4" [7ea77cbc-ce8d-488d-8ced-0328e783cba0] Running
	I1025 09:11:16.175964  221171 system_pods.go:89] "kube-apiserver-old-k8s-version-959110" [fcba789f-8536-4ef7-8516-ddcd2ea91609] Running
	I1025 09:11:16.175968  221171 system_pods.go:89] "kube-controller-manager-old-k8s-version-959110" [d4bd9320-ac8b-4669-ae0e-b1d742f172a0] Running
	I1025 09:11:16.175973  221171 system_pods.go:89] "kube-proxy-zrfv4" [5deb9893-69f7-459d-87c3-30ecc26ca937] Running
	I1025 09:11:16.175981  221171 system_pods.go:89] "kube-scheduler-old-k8s-version-959110" [f53af926-4da1-40e2-ac93-a045432d16b4] Running
	I1025 09:11:16.175984  221171 system_pods.go:89] "storage-provisioner" [e3046c99-91ff-4a4f-9bf2-cb82470c9b75] Running
	I1025 09:11:16.175992  221171 system_pods.go:126] duration metric: took 1.052272781s to wait for k8s-apps to be running ...
	I1025 09:11:16.176003  221171 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:11:16.176045  221171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:11:16.189587  221171 system_svc.go:56] duration metric: took 13.572872ms WaitForService to wait for kubelet
	I1025 09:11:16.189617  221171 kubeadm.go:586] duration metric: took 13.443352623s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:11:16.189636  221171 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:11:16.192419  221171 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:11:16.192442  221171 node_conditions.go:123] node cpu capacity is 8
	I1025 09:11:16.192455  221171 node_conditions.go:105] duration metric: took 2.803004ms to run NodePressure ...
	I1025 09:11:16.192467  221171 start.go:241] waiting for startup goroutines ...
	I1025 09:11:16.192476  221171 start.go:246] waiting for cluster config update ...
	I1025 09:11:16.192487  221171 start.go:255] writing updated cluster config ...
	I1025 09:11:16.192799  221171 ssh_runner.go:195] Run: rm -f paused
	I1025 09:11:16.197016  221171 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:11:16.201337  221171 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wm9rk" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:16.206296  221171 pod_ready.go:94] pod "coredns-5dd5756b68-wm9rk" is "Ready"
	I1025 09:11:16.206345  221171 pod_ready.go:86] duration metric: took 4.980086ms for pod "coredns-5dd5756b68-wm9rk" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:16.209192  221171 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:16.213480  221171 pod_ready.go:94] pod "etcd-old-k8s-version-959110" is "Ready"
	I1025 09:11:16.213508  221171 pod_ready.go:86] duration metric: took 4.29635ms for pod "etcd-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:16.216358  221171 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:16.220550  221171 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-959110" is "Ready"
	I1025 09:11:16.220575  221171 pod_ready.go:86] duration metric: took 4.193082ms for pod "kube-apiserver-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:16.223206  221171 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:15.837612  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:33762->192.168.85.2:8443: read: connection reset by peer
	I1025 09:11:15.837692  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:15.838088  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:15.944388  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:15.944862  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:16.444576  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:16.601710  221171 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-959110" is "Ready"
	I1025 09:11:16.601734  221171 pod_ready.go:86] duration metric: took 378.509263ms for pod "kube-controller-manager-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:16.801538  221171 pod_ready.go:83] waiting for pod "kube-proxy-zrfv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:17.201556  221171 pod_ready.go:94] pod "kube-proxy-zrfv4" is "Ready"
	I1025 09:11:17.201586  221171 pod_ready.go:86] duration metric: took 400.021952ms for pod "kube-proxy-zrfv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:17.402314  221171 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:17.800851  221171 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-959110" is "Ready"
	I1025 09:11:17.800882  221171 pod_ready.go:86] duration metric: took 398.541665ms for pod "kube-scheduler-old-k8s-version-959110" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:11:17.800896  221171 pod_ready.go:40] duration metric: took 1.60384219s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:11:17.847565  221171 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 09:11:17.849496  221171 out.go:203] 
	W1025 09:11:17.850892  221171 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 09:11:17.852257  221171 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 09:11:17.853934  221171 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-959110" cluster and "default" namespace by default
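Before declaring this cluster up, the run waited for every kube-system pod to report Ready and then re-checked each control-plane pod individually. A rough client-go equivalent of that wait (the kubeconfig path is an assumption for illustration; this is not minikube's own code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed path; the test harness uses its own per-job kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll once a second, for up to 6 minutes, until all pods are Ready.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // transient API error; keep polling
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					return false, nil
				}
			}
			return true, nil
		})
	fmt.Println("all kube-system pods ready:", err == nil)
}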
	I1025 09:11:14.511741  227228 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-047620:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.05517643s)
	I1025 09:11:14.511775  227228 kic.go:203] duration metric: took 5.055336075s to extract preloaded images to volume ...
	W1025 09:11:14.511869  227228 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:11:14.511908  227228 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:11:14.511943  227228 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:11:14.570984  227228 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-047620 --name missing-upgrade-047620 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-047620 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-047620 --network missing-upgrade-047620 --ip 192.168.103.2 --volume missing-upgrade-047620:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1025 09:11:14.864378  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Running}}
	I1025 09:11:14.888069  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	I1025 09:11:14.912099  227228 cli_runner.go:164] Run: docker exec missing-upgrade-047620 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:11:14.966368  227228 oci.go:144] the created container "missing-upgrade-047620" has a running status.
	I1025 09:11:14.966414  227228 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa...
	I1025 09:11:15.024509  227228 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:11:15.053292  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	I1025 09:11:15.070859  227228 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:11:15.070885  227228 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-047620 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:11:15.124373  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	I1025 09:11:15.146080  227228 machine.go:93] provisionDockerMachine start ...
	I1025 09:11:15.146373  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:15.173341  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:15.173730  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:15.173750  227228 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:11:15.174784  227228 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:11:18.292676  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-047620
	
	I1025 09:11:18.292712  227228 ubuntu.go:182] provisioning hostname "missing-upgrade-047620"
	I1025 09:11:18.292775  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:18.313271  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:18.313578  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:18.313607  227228 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-047620 && echo "missing-upgrade-047620" | sudo tee /etc/hostname
	I1025 09:11:18.443841  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-047620
	
	I1025 09:11:18.443927  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:18.462573  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:18.462907  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:18.462939  227228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-047620' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-047620/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-047620' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:11:18.579425  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:11:18.579455  227228 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:11:18.579500  227228 ubuntu.go:190] setting up certificates
	I1025 09:11:18.579512  227228 provision.go:84] configureAuth start
	I1025 09:11:18.579588  227228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-047620
	I1025 09:11:18.598986  227228 provision.go:143] copyHostCerts
	I1025 09:11:18.599043  227228 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:11:18.599050  227228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:11:18.599118  227228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:11:18.599214  227228 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:11:18.599222  227228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:11:18.599248  227228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:11:18.599316  227228 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:11:18.599326  227228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:11:18.599365  227228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:11:18.599435  227228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-047620 san=[127.0.0.1 192.168.103.2 localhost minikube missing-upgrade-047620]
	I1025 09:11:18.657912  227228 provision.go:177] copyRemoteCerts
	I1025 09:11:18.657966  227228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:11:18.658000  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:18.677932  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	I1025 09:11:18.766743  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:11:18.795833  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 09:11:18.821462  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:11:18.847026  227228 provision.go:87] duration metric: took 267.484658ms to configureAuth
	I1025 09:11:18.847050  227228 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:11:18.847226  227228 config.go:182] Loaded profile config "missing-upgrade-047620": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 09:11:18.847323  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:18.866147  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:18.866388  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:18.866411  227228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:11:19.131201  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:11:19.131225  227228 machine.go:96] duration metric: took 3.985105718s to provisionDockerMachine
	I1025 09:11:19.131234  227228 client.go:171] duration metric: took 10.13951991s to LocalClient.Create
	I1025 09:11:19.131263  227228 start.go:167] duration metric: took 10.139577124s to libmachine.API.Create "missing-upgrade-047620"
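The sysconfig write just above is how minikube passes runtime flags to CRI-O: it drops an environment file and restarts the service; consuming CRIO_MINIKUBE_OPTIONS is presumably wired into the kicbase image's crio unit (an assumption, not visible in this log). Verifying the result on the node is straightforward:

	# confirm the drop-in minikube wrote, and that crio came back after the restart
	cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio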
	I1025 09:11:19.131270  227228 start.go:293] postStartSetup for "missing-upgrade-047620" (driver="docker")
	I1025 09:11:19.131279  227228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:11:19.131335  227228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:11:19.131384  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:19.150694  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	I1025 09:11:19.240020  227228 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:11:19.243752  227228 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:11:19.243779  227228 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 09:11:19.243793  227228 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 09:11:19.243801  227228 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 09:11:19.243812  227228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:11:19.243861  227228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:11:19.243963  227228 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:11:19.244081  227228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:11:19.253702  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:11:19.282897  227228 start.go:296] duration metric: took 151.616762ms for postStartSetup
	I1025 09:11:19.283273  227228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-047620
	I1025 09:11:19.303025  227228 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/config.json ...
	I1025 09:11:19.303346  227228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:11:19.303395  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:19.321595  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	I1025 09:11:19.403753  227228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:11:19.408270  227228 start.go:128] duration metric: took 10.418863203s to createHost
	I1025 09:11:19.408357  227228 cli_runner.go:164] Run: docker container inspect missing-upgrade-047620 --format={{.State.Status}}
	W1025 09:11:19.426283  227228 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:11:19.426318  227228 machine.go:93] provisionDockerMachine start ...
	I1025 09:11:19.426403  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:19.444614  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:19.444900  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:19.444917  227228 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:11:19.560739  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-047620
	
	I1025 09:11:19.560772  227228 ubuntu.go:182] provisioning hostname "missing-upgrade-047620"
	I1025 09:11:19.560831  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:19.579356  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:19.579570  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:19.579588  227228 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-047620 && echo "missing-upgrade-047620" | sudo tee /etc/hostname
	I1025 09:11:19.707631  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-047620
	
	I1025 09:11:19.707737  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:19.727180  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:19.727384  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:19.727401  227228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-047620' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-047620/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-047620' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:11:19.845444  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:11:19.845476  227228 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:11:19.845496  227228 ubuntu.go:190] setting up certificates
	I1025 09:11:19.845511  227228 provision.go:84] configureAuth start
	I1025 09:11:19.845573  227228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-047620
	I1025 09:11:19.864765  227228 provision.go:143] copyHostCerts
	I1025 09:11:19.864831  227228 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:11:19.864843  227228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:11:19.864935  227228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:11:19.865052  227228 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:11:19.865066  227228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:11:19.865100  227228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:11:19.865183  227228 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:11:19.865194  227228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:11:19.865223  227228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:11:19.865298  227228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-047620 san=[127.0.0.1 192.168.103.2 localhost minikube missing-upgrade-047620]
	I1025 09:11:19.932917  227228 provision.go:177] copyRemoteCerts
	I1025 09:11:19.932971  227228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:11:19.933005  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:19.952014  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	I1025 09:11:20.040735  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:11:20.068981  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 09:11:20.094875  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:11:20.120947  227228 provision.go:87] duration metric: took 275.423609ms to configureAuth
	I1025 09:11:20.120972  227228 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:11:20.121124  227228 config.go:182] Loaded profile config "missing-upgrade-047620": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 09:11:20.121213  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:20.139935  227228 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:20.140313  227228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1025 09:11:20.140345  227228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:11:20.372327  227228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:11:20.372356  227228 machine.go:96] duration metric: took 946.029612ms to provisionDockerMachine
	I1025 09:11:20.372372  227228 start.go:293] postStartSetup for "missing-upgrade-047620" (driver="docker")
	I1025 09:11:20.372386  227228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:11:20.372448  227228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:11:20.372493  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:20.392131  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	I1025 09:11:20.480063  227228 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:11:20.483736  227228 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:11:20.483772  227228 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 09:11:20.483783  227228 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 09:11:20.483790  227228 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 09:11:20.483806  227228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:11:20.483883  227228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:11:20.483979  227228 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:11:20.484095  227228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:11:20.493877  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:11:20.520852  227228 start.go:296] duration metric: took 148.466476ms for postStartSetup
	I1025 09:11:20.520935  227228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:11:20.520987  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:20.539966  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	I1025 09:11:20.622760  227228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:11:20.627363  227228 fix.go:56] duration metric: took 27.517180629s for fixHost
	I1025 09:11:20.627390  227228 start.go:83] releasing machines lock for "missing-upgrade-047620", held for 27.517231874s
	I1025 09:11:20.627455  227228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-047620
	I1025 09:11:20.646659  227228 ssh_runner.go:195] Run: cat /version.json
	I1025 09:11:20.646688  227228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:11:20.646740  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:20.646780  227228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-047620
	I1025 09:11:20.666976  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	I1025 09:11:20.667867  227228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/missing-upgrade-047620/id_rsa Username:docker}
	W1025 09:11:20.843800  227228 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.32.0 -> Actual minikube version: v1.37.0
	I1025 09:11:20.843915  227228 ssh_runner.go:195] Run: systemctl --version
	I1025 09:11:20.848929  227228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:11:20.990422  227228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 09:11:20.996029  227228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:11:21.019029  227228 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1025 09:11:21.019104  227228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:11:21.051378  227228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
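To keep kindnet as the only CNI, minikube sidelines the stock loopback, bridge, and podman configs by renaming them with a .mk_disabled suffix, which is what the find ... -exec mv runs above do. Should you ever need to reverse that by hand, a hedged sketch (the same rename, inverted):

	# restore any CNI configs minikube sidelined
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;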
	I1025 09:11:21.051415  227228 start.go:495] detecting cgroup driver to use...
	I1025 09:11:21.051450  227228 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:11:21.051505  227228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:11:21.068210  227228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:11:21.080878  227228 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:11:21.080943  227228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:11:21.095598  227228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:11:21.111063  227228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:11:21.182026  227228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:11:21.255484  227228 docker.go:234] disabling docker service ...
	I1025 09:11:21.255560  227228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:11:21.274455  227228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:11:21.287340  227228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:11:21.356177  227228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:11:21.529524  227228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:11:21.541828  227228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:11:21.560441  227228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 09:11:21.560509  227228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:21.574249  227228 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:11:21.574326  227228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:21.585463  227228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:21.596234  227228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:21.606843  227228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:11:21.616596  227228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:21.626888  227228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:21.644501  227228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
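Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick check of the outcome (key names taken from the commands above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",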
	I1025 09:11:21.655682  227228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:11:21.664942  227228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:11:21.674509  227228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:11:21.739049  227228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:11:21.836576  227228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:11:21.836661  227228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:11:21.840682  227228 start.go:563] Will wait 60s for crictl version
	I1025 09:11:21.840749  227228 ssh_runner.go:195] Run: which crictl
	I1025 09:11:21.844511  227228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 09:11:21.881624  227228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
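The bare crictl version call above works without any endpoint flag because of the /etc/crictl.yaml written earlier; the explicit form below is equivalent and useful when that config file is absent:

	sudo crictl version
	# equivalent, bypassing /etc/crictl.yaml:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version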
	I1025 09:11:21.881720  227228 ssh_runner.go:195] Run: crio --version
	I1025 09:11:21.919351  227228 ssh_runner.go:195] Run: crio --version
	I1025 09:11:21.960405  227228 out.go:179] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1025 09:11:21.445767  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 09:11:21.445828  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:21.961749  227228 cli_runner.go:164] Run: docker network inspect missing-upgrade-047620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:11:21.980931  227228 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:11:21.985244  227228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:11:21.997610  227228 kubeadm.go:883] updating cluster {Name:missing-upgrade-047620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-047620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:11:21.997748  227228 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1025 09:11:21.997815  227228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:11:22.058422  227228 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:11:22.058444  227228 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:11:22.058498  227228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:11:22.094747  227228 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:11:22.094767  227228 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:11:22.094774  227228 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.28.3 crio true true} ...
	I1025 09:11:22.094851  227228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-047620 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-047620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
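The kubelet drop-in shown above is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the 373-byte scp a few lines below. On the node, the merged unit and its state can be inspected with:

	# show kubelet.service plus all drop-ins, then its runtime state
	systemctl cat kubelet
	systemctl status kubelet --no-pager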
	I1025 09:11:22.094917  227228 ssh_runner.go:195] Run: crio config
	I1025 09:11:22.139730  227228 cni.go:84] Creating CNI manager for ""
	I1025 09:11:22.139769  227228 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:11:22.139787  227228 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:11:22.139814  227228 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-047620 NodeName:missing-upgrade-047620 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:11:22.139975  227228 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-047620"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:11:22.140056  227228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 09:11:22.150346  227228 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:11:22.150410  227228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:11:22.160268  227228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1025 09:11:22.179598  227228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:11:22.201697  227228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
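The kubeadm config rendered above is the 2162-byte file scp'd to /var/tmp/minikube/kubeadm.yaml.new; minikube later drives kubeadm init from the version-pinned binaries directory with a file like it. A rough by-hand equivalent (hedged; the preflight-ignore flags minikube actually adds are not shown in this log):

	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new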
	I1025 09:11:22.221655  227228 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:11:22.225325  227228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:11:22.236848  227228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:11:22.307448  227228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:11:22.330248  227228 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620 for IP: 192.168.103.2
	I1025 09:11:22.330271  227228 certs.go:195] generating shared ca certs ...
	I1025 09:11:22.330289  227228 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:22.330448  227228 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:11:22.330501  227228 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:11:22.330515  227228 certs.go:257] generating profile certs ...
	I1025 09:11:22.330660  227228 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/client.key
	I1025 09:11:22.330697  227228 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.key.1b5e3e3c
	I1025 09:11:22.330720  227228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.crt.1b5e3e3c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:11:22.428361  227228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.crt.1b5e3e3c ...
	I1025 09:11:22.428390  227228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.crt.1b5e3e3c: {Name:mke8bd1fba4fba17121ec0629583bb12eee62120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:22.428599  227228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.key.1b5e3e3c ...
	I1025 09:11:22.428617  227228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.key.1b5e3e3c: {Name:mkd68b560f23793a09f652b7912e07be84454fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:22.428736  227228 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.crt.1b5e3e3c -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.crt
	I1025 09:11:22.428945  227228 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.key.1b5e3e3c -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.key
	I1025 09:11:22.429124  227228 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/proxy-client.key
	I1025 09:11:22.429281  227228 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:11:22.429321  227228 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:11:22.429339  227228 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:11:22.429373  227228 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:11:22.429406  227228 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:11:22.429437  227228 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:11:22.429492  227228 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:11:22.430042  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:11:22.456846  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:11:22.483891  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:11:22.511418  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:11:22.538963  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 09:11:22.565346  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:11:22.591408  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:11:22.617689  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/missing-upgrade-047620/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:11:22.643011  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:11:22.672156  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:11:22.697604  227228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:11:22.724483  227228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:11:22.743839  227228 ssh_runner.go:195] Run: openssl version
	I1025 09:11:22.749528  227228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:11:22.760085  227228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:22.763908  227228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:22.763977  227228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:22.771242  227228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:11:22.782455  227228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:11:22.793610  227228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:11:22.797452  227228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:11:22.797501  227228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:11:22.804487  227228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:11:22.814728  227228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:11:22.824589  227228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:11:22.828326  227228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:11:22.828416  227228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:11:22.835314  227228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
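The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: they are OpenSSL subject-hash filenames, which is how the system trust store locates a CA, and why each install step pairs an x509 -hash call with an ln -fs. The same pairing by hand, with CERT as a stand-in path:

	CERT=/usr/share/ca-certificates/minikubeCA.pem   # stand-in; any CA cert works
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"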
	I1025 09:11:22.845496  227228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:11:22.849735  227228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:11:22.856636  227228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:11:22.863431  227228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:11:22.870511  227228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:11:22.877722  227228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:11:22.884918  227228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
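Each openssl x509 -checkend 86400 run above exits nonzero if the certificate expires within 24 hours, which is what tells minikube whether control-plane certs need regenerating. A compact loop over the same files:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring within 24h: ${c}"
	done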
	I1025 09:11:22.891903  227228 kubeadm.go:400] StartCluster: {Name:missing-upgrade-047620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-047620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:11:22.891973  227228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:11:22.892018  227228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	
	
	==> CRI-O <==
	Oct 25 09:11:15 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:15.259685144Z" level=info msg="Starting container: d3de810770209de7ec1994a2925f934d2af91c161878da3469d3f664a399d313" id=3e446137-78f5-4088-85f6-0093b19624ea name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:11:15 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:15.261963175Z" level=info msg="Started container" PID=2133 containerID=d3de810770209de7ec1994a2925f934d2af91c161878da3469d3f664a399d313 description=kube-system/coredns-5dd5756b68-wm9rk/coredns id=3e446137-78f5-4088-85f6-0093b19624ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=705e87ee583edfdb77dee0c2e252d7149613455785144433b7bb902f3197b405
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.312978303Z" level=info msg="Running pod sandbox: default/busybox/POD" id=862bd52e-263b-40c9-8779-f5012358e1ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.313065896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.318192408Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:14f696c31bc95479141ecb0f479947f7c27097041e2cca7ca7a4b49fc033957e UID:2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce NetNS:/var/run/netns/a73a9237-a3f3-4f93-9a88-a73058d5dce3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d04880}] Aliases:map[]}"
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.318221867Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.32790388Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:14f696c31bc95479141ecb0f479947f7c27097041e2cca7ca7a4b49fc033957e UID:2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce NetNS:/var/run/netns/a73a9237-a3f3-4f93-9a88-a73058d5dce3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d04880}] Aliases:map[]}"
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.328036482Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.328804944Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.32959253Z" level=info msg="Ran pod sandbox 14f696c31bc95479141ecb0f479947f7c27097041e2cca7ca7a4b49fc033957e with infra container: default/busybox/POD" id=862bd52e-263b-40c9-8779-f5012358e1ea name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.330869041Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d4db8cb8-c12a-4a1b-addd-731b74beb5bd name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.331007585Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d4db8cb8-c12a-4a1b-addd-731b74beb5bd name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.331054797Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d4db8cb8-c12a-4a1b-addd-731b74beb5bd name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.331607817Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=af5ee503-00ee-473f-9f9b-7729732af179 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:11:18 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:18.334747399Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.022102449Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=af5ee503-00ee-473f-9f9b-7729732af179 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.023034639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eacb08c7-38dd-47ac-bcad-efb0e886a256 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.024803025Z" level=info msg="Creating container: default/busybox/busybox" id=3acb9ef5-77e4-4622-9b71-f6a44309b392 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.024953677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.028762651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.029286855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.04782477Z" level=info msg="Created container 08aa0489bf099f0aa4cac3e04cfeed0645e09aa877bf9617f1206ff59ac07e6f: default/busybox/busybox" id=3acb9ef5-77e4-4622-9b71-f6a44309b392 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.048240687Z" level=info msg="Starting container: 08aa0489bf099f0aa4cac3e04cfeed0645e09aa877bf9617f1206ff59ac07e6f" id=1ed192d1-62b1-4dfc-a49a-95f397324522 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:11:19 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:19.050409206Z" level=info msg="Started container" PID=2208 containerID=08aa0489bf099f0aa4cac3e04cfeed0645e09aa877bf9617f1206ff59ac07e6f description=default/busybox/busybox id=1ed192d1-62b1-4dfc-a49a-95f397324522 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14f696c31bc95479141ecb0f479947f7c27097041e2cca7ca7a4b49fc033957e
	Oct 25 09:11:26 old-k8s-version-959110 crio[775]: time="2025-10-25T09:11:26.100584647Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	08aa0489bf099       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   14f696c31bc95       busybox                                          default
	d3de810770209       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   705e87ee583ed       coredns-5dd5756b68-wm9rk                         kube-system
	646cdf2bdb942       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   73822abb84ef6       storage-provisioner                              kube-system
	0bd8cd1e50774       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   260334562e66c       kindnet-gq9q4                                    kube-system
	a22621f1caeec       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   a8978e0a5307b       kube-proxy-zrfv4                                 kube-system
	8d1f6bd611e59       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   be0dbdff02c31       kube-controller-manager-old-k8s-version-959110   kube-system
	38589031ce657       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   3aaf277564555       kube-apiserver-old-k8s-version-959110            kube-system
	7f4f836eb13dc       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   06d873d6a2801       kube-scheduler-old-k8s-version-959110            kube-system
	2e458eaa86ea7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   7bf05fd80599e       etcd-old-k8s-version-959110                      kube-system
	
	
	==> coredns [d3de810770209de7ec1994a2925f934d2af91c161878da3469d3f664a399d313] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41178 - 54215 "HINFO IN 508667208211919043.4100698500415603721. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.072729871s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-959110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-959110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=old-k8s-version-959110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_10_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:10:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-959110
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:11:20 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:11:20 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:11:20 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:11:20 +0000   Sat, 25 Oct 2025 09:11:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-959110
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e79815a7-9819-419a-accf-a6b2fbca5bb9
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-wm9rk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-959110                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-gq9q4                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-959110             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-old-k8s-version-959110    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-zrfv4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-959110             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node old-k8s-version-959110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-959110 event: Registered Node old-k8s-version-959110 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-959110 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [2e458eaa86ea70a7d2051c8e963cfe5f54158b31b6400e3b3074c38f60ed5950] <==
	{"level":"info","ts":"2025-10-25T09:10:45.438173Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:10:45.438229Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:10:45.437955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:10:45.438246Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T09:10:45.438937Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-10-25T09:10:45.439094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-10-25T09:10:50.610457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.827189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:10:50.610568Z","caller":"traceutil/trace.go:171","msg":"trace[23930399] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:0; response_revision:255; }","duration":"174.990798ms","start":"2025-10-25T09:10:50.435552Z","end":"2025-10-25T09:10:50.610543Z","steps":["trace[23930399] 'range keys from in-memory index tree'  (duration: 174.736412ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:10:50.770479Z","caller":"traceutil/trace.go:171","msg":"trace[441655387] linearizableReadLoop","detail":"{readStateIndex:263; appliedIndex:262; }","duration":"100.726952ms","start":"2025-10-25T09:10:50.669735Z","end":"2025-10-25T09:10:50.770462Z","steps":["trace[441655387] 'read index received'  (duration: 100.552195ms)","trace[441655387] 'applied index is now lower than readState.Index'  (duration: 174.077µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:10:50.770605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.888141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:10:50.770659Z","caller":"traceutil/trace.go:171","msg":"trace[1337717611] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:256; }","duration":"100.938762ms","start":"2025-10-25T09:10:50.669693Z","end":"2025-10-25T09:10:50.770632Z","steps":["trace[1337717611] 'agreement among raft nodes before linearized reading'  (duration: 100.841362ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:10:50.770597Z","caller":"traceutil/trace.go:171","msg":"trace[130720252] transaction","detail":"{read_only:false; response_revision:256; number_of_response:1; }","duration":"153.648899ms","start":"2025-10-25T09:10:50.616924Z","end":"2025-10-25T09:10:50.770573Z","steps":["trace[130720252] 'process raft request'  (duration: 153.414752ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:10:51.439151Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.74743ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765720448904314 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-959110\" mod_revision:251 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-959110\" value_size:7179 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-959110\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T09:10:51.439338Z","caller":"traceutil/trace.go:171","msg":"trace[679844877] linearizableReadLoop","detail":"{readStateIndex:272; appliedIndex:270; }","duration":"265.411995ms","start":"2025-10-25T09:10:51.173913Z","end":"2025-10-25T09:10:51.439325Z","steps":["trace[679844877] 'read index received'  (duration: 157.944824ms)","trace[679844877] 'applied index is now lower than readState.Index'  (duration: 107.466126ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:10:51.439524Z","caller":"traceutil/trace.go:171","msg":"trace[1764317640] transaction","detail":"{read_only:false; response_revision:264; number_of_response:1; }","duration":"266.343992ms","start":"2025-10-25T09:10:51.173168Z","end":"2025-10-25T09:10:51.439512Z","steps":["trace[1764317640] 'process raft request'  (duration: 266.087349ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:10:51.439536Z","caller":"traceutil/trace.go:171","msg":"trace[1521689231] transaction","detail":"{read_only:false; response_revision:263; number_of_response:1; }","duration":"267.663232ms","start":"2025-10-25T09:10:51.171838Z","end":"2025-10-25T09:10:51.439502Z","steps":["trace[1521689231] 'process raft request'  (duration: 160.04794ms)","trace[1521689231] 'compare'  (duration: 106.52533ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:10:51.439595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.685148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-959110\" ","response":"range_response_count:1 size:4952"}
	{"level":"info","ts":"2025-10-25T09:10:51.439625Z","caller":"traceutil/trace.go:171","msg":"trace[1307949045] range","detail":"{range_begin:/registry/minions/old-k8s-version-959110; range_end:; response_count:1; response_revision:264; }","duration":"265.722627ms","start":"2025-10-25T09:10:51.173891Z","end":"2025-10-25T09:10:51.439614Z","steps":["trace[1307949045] 'agreement among raft nodes before linearized reading'  (duration: 265.622164ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:10:51.651709Z","caller":"traceutil/trace.go:171","msg":"trace[1679669214] transaction","detail":"{read_only:false; response_revision:265; number_of_response:1; }","duration":"202.882189ms","start":"2025-10-25T09:10:51.44881Z","end":"2025-10-25T09:10:51.651693Z","steps":["trace[1679669214] 'process raft request'  (duration: 198.806484ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:10:51.668578Z","caller":"traceutil/trace.go:171","msg":"trace[1946431721] transaction","detail":"{read_only:false; response_revision:266; number_of_response:1; }","duration":"218.761391ms","start":"2025-10-25T09:10:51.449795Z","end":"2025-10-25T09:10:51.668557Z","steps":["trace[1946431721] 'process raft request'  (duration: 218.552578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:10:51.857716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.079886ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765720448904321 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-959110\" mod_revision:253 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-959110\" value_size:4035 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-959110\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T09:10:51.857807Z","caller":"traceutil/trace.go:171","msg":"trace[1979040940] linearizableReadLoop","detail":"{readStateIndex:275; appliedIndex:274; }","duration":"114.872ms","start":"2025-10-25T09:10:51.742922Z","end":"2025-10-25T09:10:51.857794Z","steps":["trace[1979040940] 'read index received'  (duration: 9.549875ms)","trace[1979040940] 'applied index is now lower than readState.Index'  (duration: 105.320867ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:10:51.857866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.966221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:10:51.857852Z","caller":"traceutil/trace.go:171","msg":"trace[595051008] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"182.820802ms","start":"2025-10-25T09:10:51.675006Z","end":"2025-10-25T09:10:51.857826Z","steps":["trace[595051008] 'process raft request'  (duration: 77.517578ms)","trace[595051008] 'compare'  (duration: 104.951197ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:10:51.857907Z","caller":"traceutil/trace.go:171","msg":"trace[820625649] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:267; }","duration":"115.001001ms","start":"2025-10-25T09:10:51.742887Z","end":"2025-10-25T09:10:51.857888Z","steps":["trace[820625649] 'agreement among raft nodes before linearized reading'  (duration: 114.941917ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:27 up 53 min,  0 user,  load average: 4.58, 3.52, 2.06
	Linux old-k8s-version-959110 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bd8cd1e507742db12caa6975b455a64e7805abd5a925dc10c5fc49675cc7088] <==
	I1025 09:11:04.470198       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:11:04.470459       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:11:04.470598       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:11:04.470612       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:11:04.470632       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:11:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:11:04.672566       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:11:04.672615       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:11:04.672628       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:11:04.672776       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:11:04.973279       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:11:04.973313       1 metrics.go:72] Registering metrics
	I1025 09:11:04.973363       1 controller.go:711] "Syncing nftables rules"
	I1025 09:11:14.677268       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:11:14.677331       1 main.go:301] handling current node
	I1025 09:11:24.674721       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:11:24.674758       1 main.go:301] handling current node
	
	
	==> kube-apiserver [38589031ce6573c917652917c33f0551f95f3370d65f33c75adbf0db09b1e265] <==
	I1025 09:10:46.654474       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:10:46.654486       1 aggregator.go:166] initial CRD sync complete...
	I1025 09:10:46.654498       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 09:10:46.654504       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:10:46.654516       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:10:46.654554       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:10:46.654562       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:10:46.654603       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:10:46.656299       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:10:46.680982       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:10:47.559533       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:10:47.563043       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:10:47.563066       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:10:48.014122       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:10:48.052277       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:10:48.164047       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:10:48.172200       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1025 09:10:48.173676       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:10:48.179969       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:10:48.589577       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:10:49.864869       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:10:49.881910       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:10:49.899959       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 09:11:02.003633       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 09:11:02.203086       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8d1f6bd611e590d5d3bd304b106ef5cf9eb8ab3d64adf794e2d16d9a5b7fc9c8] <==
	I1025 09:11:01.503081       1 shared_informer.go:318] Caches are synced for cronjob
	I1025 09:11:01.503270       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:11:01.600038       1 shared_informer.go:318] Caches are synced for attach detach
	I1025 09:11:01.969579       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:11:01.978011       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:11:01.978049       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:11:02.007818       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1025 09:11:02.215127       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zrfv4"
	I1025 09:11:02.216773       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gq9q4"
	I1025 09:11:02.456365       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-q95l4"
	I1025 09:11:02.462595       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wm9rk"
	I1025 09:11:02.468306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="460.555682ms"
	I1025 09:11:02.475185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.811395ms"
	I1025 09:11:02.475303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.484µs"
	I1025 09:11:02.476588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.138µs"
	I1025 09:11:03.116188       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 09:11:03.124154       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-q95l4"
	I1025 09:11:03.130340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.216468ms"
	I1025 09:11:03.136007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.611379ms"
	I1025 09:11:03.136106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.89µs"
	I1025 09:11:14.884623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.047µs"
	I1025 09:11:14.907339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.063µs"
	I1025 09:11:16.066930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.089543ms"
	I1025 09:11:16.067044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.898µs"
	I1025 09:11:16.457214       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [a22621f1caeec152a343e8cb74b65853ca6ffd4b4ae27e52131b6825616f4c45] <==
	I1025 09:11:02.613602       1 server_others.go:69] "Using iptables proxy"
	I1025 09:11:02.623947       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1025 09:11:02.642801       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:11:02.645074       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:11:02.645105       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:11:02.645112       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:11:02.645144       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:11:02.645413       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:11:02.645432       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:11:02.646083       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:11:02.646122       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:11:02.646120       1 config.go:315] "Starting node config controller"
	I1025 09:11:02.646138       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:11:02.646325       1 config.go:188] "Starting service config controller"
	I1025 09:11:02.646357       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:11:02.746496       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:11:02.746505       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:11:02.746544       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7f4f836eb13dc2d20de70a81f984d9ed059aab97f4c856177e69c1f84ad4a19c] <==
	W1025 09:10:46.613222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 09:10:46.613252       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 09:10:46.613253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 09:10:46.613260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 09:10:46.613275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 09:10:46.613289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 09:10:46.613321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 09:10:46.613336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 09:10:46.613258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 09:10:46.613369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 09:10:47.575245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 09:10:47.575284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 09:10:47.599973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 09:10:47.600015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 09:10:47.636322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 09:10:47.636369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 09:10:47.642903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 09:10:47.642949       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 09:10:47.700209       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 09:10:47.700245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 09:10:47.739054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 09:10:47.739084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 09:10:47.815476       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 09:10:47.815520       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1025 09:10:48.208258       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:11:01 old-k8s-version-959110 kubelet[1388]: I1025 09:11:01.563238    1388 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.221706    1388 topology_manager.go:215] "Topology Admit Handler" podUID="5deb9893-69f7-459d-87c3-30ecc26ca937" podNamespace="kube-system" podName="kube-proxy-zrfv4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.223556    1388 topology_manager.go:215] "Topology Admit Handler" podUID="7ea77cbc-ce8d-488d-8ced-0328e783cba0" podNamespace="kube-system" podName="kindnet-gq9q4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324080    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ea77cbc-ce8d-488d-8ced-0328e783cba0-lib-modules\") pod \"kindnet-gq9q4\" (UID: \"7ea77cbc-ce8d-488d-8ced-0328e783cba0\") " pod="kube-system/kindnet-gq9q4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324129    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5deb9893-69f7-459d-87c3-30ecc26ca937-xtables-lock\") pod \"kube-proxy-zrfv4\" (UID: \"5deb9893-69f7-459d-87c3-30ecc26ca937\") " pod="kube-system/kube-proxy-zrfv4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324162    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5deb9893-69f7-459d-87c3-30ecc26ca937-lib-modules\") pod \"kube-proxy-zrfv4\" (UID: \"5deb9893-69f7-459d-87c3-30ecc26ca937\") " pod="kube-system/kube-proxy-zrfv4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324183    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s29bs\" (UniqueName: \"kubernetes.io/projected/5deb9893-69f7-459d-87c3-30ecc26ca937-kube-api-access-s29bs\") pod \"kube-proxy-zrfv4\" (UID: \"5deb9893-69f7-459d-87c3-30ecc26ca937\") " pod="kube-system/kube-proxy-zrfv4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324202    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9fg\" (UniqueName: \"kubernetes.io/projected/7ea77cbc-ce8d-488d-8ced-0328e783cba0-kube-api-access-zh9fg\") pod \"kindnet-gq9q4\" (UID: \"7ea77cbc-ce8d-488d-8ced-0328e783cba0\") " pod="kube-system/kindnet-gq9q4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324234    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7ea77cbc-ce8d-488d-8ced-0328e783cba0-cni-cfg\") pod \"kindnet-gq9q4\" (UID: \"7ea77cbc-ce8d-488d-8ced-0328e783cba0\") " pod="kube-system/kindnet-gq9q4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324252    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5deb9893-69f7-459d-87c3-30ecc26ca937-kube-proxy\") pod \"kube-proxy-zrfv4\" (UID: \"5deb9893-69f7-459d-87c3-30ecc26ca937\") " pod="kube-system/kube-proxy-zrfv4"
	Oct 25 09:11:02 old-k8s-version-959110 kubelet[1388]: I1025 09:11:02.324269    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ea77cbc-ce8d-488d-8ced-0328e783cba0-xtables-lock\") pod \"kindnet-gq9q4\" (UID: \"7ea77cbc-ce8d-488d-8ced-0328e783cba0\") " pod="kube-system/kindnet-gq9q4"
	Oct 25 09:11:03 old-k8s-version-959110 kubelet[1388]: I1025 09:11:03.023260    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zrfv4" podStartSLOduration=1.023198394 podCreationTimestamp="2025-10-25 09:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:11:03.022882547 +0000 UTC m=+13.193995855" watchObservedRunningTime="2025-10-25 09:11:03.023198394 +0000 UTC m=+13.194311704"
	Oct 25 09:11:05 old-k8s-version-959110 kubelet[1388]: I1025 09:11:05.024930    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gq9q4" podStartSLOduration=1.293226362 podCreationTimestamp="2025-10-25 09:11:02 +0000 UTC" firstStartedPulling="2025-10-25 09:11:02.533282528 +0000 UTC m=+12.704395829" lastFinishedPulling="2025-10-25 09:11:04.264933441 +0000 UTC m=+14.436046746" observedRunningTime="2025-10-25 09:11:05.024580795 +0000 UTC m=+15.195694105" watchObservedRunningTime="2025-10-25 09:11:05.024877279 +0000 UTC m=+15.195990588"
	Oct 25 09:11:14 old-k8s-version-959110 kubelet[1388]: I1025 09:11:14.853615    1388 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 25 09:11:14 old-k8s-version-959110 kubelet[1388]: I1025 09:11:14.882291    1388 topology_manager.go:215] "Topology Admit Handler" podUID="e3046c99-91ff-4a4f-9bf2-cb82470c9b75" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 09:11:14 old-k8s-version-959110 kubelet[1388]: I1025 09:11:14.884792    1388 topology_manager.go:215] "Topology Admit Handler" podUID="865c21db-7403-433a-b306-c34726b80124" podNamespace="kube-system" podName="coredns-5dd5756b68-wm9rk"
	Oct 25 09:11:14 old-k8s-version-959110 kubelet[1388]: I1025 09:11:14.921083    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2b7l\" (UniqueName: \"kubernetes.io/projected/e3046c99-91ff-4a4f-9bf2-cb82470c9b75-kube-api-access-c2b7l\") pod \"storage-provisioner\" (UID: \"e3046c99-91ff-4a4f-9bf2-cb82470c9b75\") " pod="kube-system/storage-provisioner"
	Oct 25 09:11:14 old-k8s-version-959110 kubelet[1388]: I1025 09:11:14.921133    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e3046c99-91ff-4a4f-9bf2-cb82470c9b75-tmp\") pod \"storage-provisioner\" (UID: \"e3046c99-91ff-4a4f-9bf2-cb82470c9b75\") " pod="kube-system/storage-provisioner"
	Oct 25 09:11:14 old-k8s-version-959110 kubelet[1388]: I1025 09:11:14.921170    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/865c21db-7403-433a-b306-c34726b80124-config-volume\") pod \"coredns-5dd5756b68-wm9rk\" (UID: \"865c21db-7403-433a-b306-c34726b80124\") " pod="kube-system/coredns-5dd5756b68-wm9rk"
	Oct 25 09:11:14 old-k8s-version-959110 kubelet[1388]: I1025 09:11:14.921237    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hw9r\" (UniqueName: \"kubernetes.io/projected/865c21db-7403-433a-b306-c34726b80124-kube-api-access-2hw9r\") pod \"coredns-5dd5756b68-wm9rk\" (UID: \"865c21db-7403-433a-b306-c34726b80124\") " pod="kube-system/coredns-5dd5756b68-wm9rk"
	Oct 25 09:11:16 old-k8s-version-959110 kubelet[1388]: I1025 09:11:16.049901    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.0498398 podCreationTimestamp="2025-10-25 09:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:11:16.049801157 +0000 UTC m=+26.220914466" watchObservedRunningTime="2025-10-25 09:11:16.0498398 +0000 UTC m=+26.220953108"
	Oct 25 09:11:18 old-k8s-version-959110 kubelet[1388]: I1025 09:11:18.010635    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wm9rk" podStartSLOduration=16.010566914 podCreationTimestamp="2025-10-25 09:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:11:16.05967225 +0000 UTC m=+26.230785558" watchObservedRunningTime="2025-10-25 09:11:18.010566914 +0000 UTC m=+28.181680315"
	Oct 25 09:11:18 old-k8s-version-959110 kubelet[1388]: I1025 09:11:18.010938    1388 topology_manager.go:215] "Topology Admit Handler" podUID="2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce" podNamespace="default" podName="busybox"
	Oct 25 09:11:18 old-k8s-version-959110 kubelet[1388]: I1025 09:11:18.040497    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2n2\" (UniqueName: \"kubernetes.io/projected/2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce-kube-api-access-jw2n2\") pod \"busybox\" (UID: \"2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce\") " pod="default/busybox"
	Oct 25 09:11:20 old-k8s-version-959110 kubelet[1388]: I1025 09:11:20.062042    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.370835253 podCreationTimestamp="2025-10-25 09:11:18 +0000 UTC" firstStartedPulling="2025-10-25 09:11:18.331251743 +0000 UTC m=+28.502365034" lastFinishedPulling="2025-10-25 09:11:19.022414089 +0000 UTC m=+29.193527393" observedRunningTime="2025-10-25 09:11:20.061457849 +0000 UTC m=+30.232571158" watchObservedRunningTime="2025-10-25 09:11:20.061997612 +0000 UTC m=+30.233110920"
	
	
	==> storage-provisioner [646cdf2bdb942894792d18c2683c0a9c1fce0092b4c7f8ffbc40450555db5393] <==
	I1025 09:11:15.256368       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:11:15.266703       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:11:15.266777       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:11:15.273440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:11:15.273661       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959110_bedbd96f-fed9-4825-b1fa-2a1ab0abaadb!
	I1025 09:11:15.273591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c422531-d5a2-40fe-8114-48f4769b0181", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-959110_bedbd96f-fed9-4825-b1fa-2a1ab0abaadb became leader
	I1025 09:11:15.374575       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959110_bedbd96f-fed9-4825-b1fa-2a1ab0abaadb!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-959110 -n old-k8s-version-959110
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-959110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.339321ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:12:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
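For context: the MK_ADDON_ENABLE_PAUSED error above is raised by a paused-state probe that shells out to `sudo runc list -f json`, which reads runc's default state directory /run/runc; on this crio node that directory does not exist, so the probe exits non-zero before the addon manifest is ever applied. The following is a minimal sketch of that probe, not minikube's actual implementation; the suggestion of `crictl ps` as a runtime-agnostic fallback is an assumption (it requires crictl on the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the failing check runs: ask runc for its container list.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this host runc's default root /run/runc is missing, so the
			// command fails exactly as in the stderr block above.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			// A runtime-agnostic check could instead query the CRI, e.g.
			// `crictl ps` (assumption: crictl is installed on the node).
			return
		}
		fmt.Printf("container state (JSON): %s\n", out)
	}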
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-016092 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-016092 describe deploy/metrics-server -n kube-system: exit status 1 (67.206727ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-016092 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
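The assertion at start_stop_delete_test.go:219 checks that the metrics-server Deployment's container image was rewritten to the registry passed via --images/--registries. A minimal sketch of that check (assuming kubectl on PATH; context, namespace, and expected image are taken from the test invocation above) would report the same NotFound here, since the enable step aborted before anything was deployed:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Read the first container image of the metrics-server Deployment.
		out, err := exec.Command("kubectl", "--context", "no-preload-016092",
			"-n", "kube-system", "get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
		if err != nil {
			// Matches the log: the Deployment was never created because
			// `addons enable` failed on the paused-state check.
			fmt.Println("metrics-server not found:", err)
			return
		}
		// The test expects the --registries rewrite to have been applied.
		if strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("image rewritten as expected:", string(out))
		} else {
			fmt.Println("unexpected image:", string(out))
		}
	}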
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-016092
helpers_test.go:243: (dbg) docker inspect no-preload-016092:

-- stdout --
	[
	    {
	        "Id": "242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3",
	        "Created": "2025-10-25T09:11:34.405672193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233631,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:11:34.444524702Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/hosts",
	        "LogPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3-json.log",
	        "Name": "/no-preload-016092",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-016092:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-016092",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3",
	                "LowerDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-016092",
	                "Source": "/var/lib/docker/volumes/no-preload-016092/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-016092",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-016092",
	                "name.minikube.sigs.k8s.io": "no-preload-016092",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d4b8350f0b79d47df07c8e138ff1944275c7eda900414dedcd01c3f3ca67d33",
	            "SandboxKey": "/var/run/docker/netns/1d4b8350f0b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-016092": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:8e:45:36:4e:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad973ee26d09cd8afb8873a923280f5e7c7740cd39b31b1cbf19d4d13b83d6e9",
	                    "EndpointID": "90707d91ce9438714ce348d3e61a2dd16a05e625d2c2af6f0daf8940affc5661",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-016092",
	                        "242e1782ecdc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
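The inspect output above shows how minikube publishes the kicbase container's service ports: 22, 2376, 5000, 8443, and 32443 are each mapped to an ephemeral host port bound to 127.0.0.1 (e.g. 22/tcp -> 33058). The harness reads these mappings back with a Go-template query against docker inspect; a minimal sketch of that SSH-port lookup, using the container name from this report:

	# Print the host port Docker mapped to the container's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-016092

The same template appears later in the log when provisioning reconnects to the restarted container over SSH.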
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-016092 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-016092 logs -n 25: (1.050527358s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-env-423026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-423026  │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ force-systemd-flag-742570 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-742570 │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ delete  │ -p force-systemd-flag-742570                                                                                                                                                                                                                  │ force-systemd-flag-742570 │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ ssh     │ -p NoKubernetes-629442 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-629442       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │                     │
	│ delete  │ -p NoKubernetes-629442                                                                                                                                                                                                                        │ NoKubernetes-629442       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-851718    │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p cert-options-077936 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-423026                                                                                                                                                                                                                   │ force-systemd-env-423026  │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p running-upgrade-462303                                                                                                                                                                                                                     │ running-upgrade-462303    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-047620    │ jenkins │ v1.32.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ cert-options-077936 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ -p cert-options-077936 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p cert-options-077936                                                                                                                                                                                                                        │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-047620    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	│ stop    │ -p old-k8s-version-959110 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p missing-upgrade-047620                                                                                                                                                                                                                     │ missing-upgrade-047620    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:11:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:11:45.420274  235923 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:11:45.420571  235923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:11:45.420581  235923 out.go:374] Setting ErrFile to fd 2...
	I1025 09:11:45.420586  235923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:11:45.420817  235923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:11:45.421294  235923 out.go:368] Setting JSON to false
	I1025 09:11:45.422542  235923 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3253,"bootTime":1761380252,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:11:45.422659  235923 start.go:141] virtualization: kvm guest
	I1025 09:11:45.424728  235923 out.go:179] * [old-k8s-version-959110] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:11:45.425965  235923 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:11:45.425946  235923 notify.go:220] Checking for updates...
	I1025 09:11:45.427341  235923 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:11:45.429011  235923 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:11:45.430395  235923 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:11:45.431570  235923 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:11:45.432782  235923 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:11:45.434533  235923 config.go:182] Loaded profile config "old-k8s-version-959110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:11:45.436484  235923 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 09:11:45.437619  235923 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:11:45.462490  235923 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:11:45.462622  235923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:11:45.524690  235923 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-25 09:11:45.513670263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:11:45.524879  235923 docker.go:318] overlay module found
	I1025 09:11:45.526740  235923 out.go:179] * Using the docker driver based on existing profile
	I1025 09:11:45.527871  235923 start.go:305] selected driver: docker
	I1025 09:11:45.527887  235923 start.go:925] validating driver "docker" against &{Name:old-k8s-version-959110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-959110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:11:45.527987  235923 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:11:45.528611  235923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:11:45.591354  235923 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-25 09:11:45.580732831 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:11:45.591661  235923 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:11:45.591716  235923 cni.go:84] Creating CNI manager for ""
	I1025 09:11:45.591765  235923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:11:45.591813  235923 start.go:349] cluster config:
	{Name:old-k8s-version-959110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-959110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:11:45.594361  235923 out.go:179] * Starting "old-k8s-version-959110" primary control-plane node in "old-k8s-version-959110" cluster
	I1025 09:11:45.595565  235923 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:11:45.596819  235923 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:11:45.597870  235923 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:11:45.597904  235923 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:11:45.597917  235923 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:11:45.597937  235923 cache.go:58] Caching tarball of preloaded images
	I1025 09:11:45.598031  235923 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:11:45.598046  235923 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 09:11:45.598154  235923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/config.json ...
	I1025 09:11:45.621791  235923 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:11:45.621812  235923 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:11:45.621827  235923 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:11:45.621859  235923 start.go:360] acquireMachinesLock for old-k8s-version-959110: {Name:mka053d5a6656af6851b6c97898bf8e78add1e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:11:45.621934  235923 start.go:364] duration metric: took 51.351µs to acquireMachinesLock for "old-k8s-version-959110"
	I1025 09:11:45.621959  235923 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:11:45.621968  235923 fix.go:54] fixHost starting: 
	I1025 09:11:45.622249  235923 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:45.640453  235923 fix.go:112] recreateIfNeeded on old-k8s-version-959110: state=Stopped err=<nil>
	W1025 09:11:45.640509  235923 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:11:42.943945  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:42.944413  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:43.444032  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:43.444467  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:43.944043  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:43.944515  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:44.444856  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:44.445423  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:44.944043  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:44.944523  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:45.444727  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:45.445176  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:45.943965  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:45.944448  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:46.444066  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:46.444524  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:46.944780  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:46.945243  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:47.444804  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:47.445324  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:45.155851  233042 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.689659285s)
	I1025 09:11:45.155883  233042 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1025 09:11:45.155912  233042 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1025 09:11:45.155960  233042 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1025 09:11:48.224542  233042 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.068559907s)
	I1025 09:11:48.224567  233042 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1025 09:11:48.224594  233042 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:11:48.224695  233042 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:11:45.642228  235923 out.go:252] * Restarting existing docker container for "old-k8s-version-959110" ...
	I1025 09:11:45.642298  235923 cli_runner.go:164] Run: docker start old-k8s-version-959110
	I1025 09:11:45.900544  235923 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:45.921936  235923 kic.go:430] container "old-k8s-version-959110" state is running.
	I1025 09:11:45.922390  235923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-959110
	I1025 09:11:45.944273  235923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/config.json ...
	I1025 09:11:45.944556  235923 machine.go:93] provisionDockerMachine start ...
	I1025 09:11:45.944682  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:45.965973  235923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:45.966206  235923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:11:45.966219  235923 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:11:45.966954  235923 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56696->127.0.0.1:33063: read: connection reset by peer
	I1025 09:11:49.111378  235923 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-959110
	
	I1025 09:11:49.111416  235923 ubuntu.go:182] provisioning hostname "old-k8s-version-959110"
	I1025 09:11:49.111544  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:49.138257  235923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:49.138554  235923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:11:49.138575  235923 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-959110 && echo "old-k8s-version-959110" | sudo tee /etc/hostname
	I1025 09:11:49.306623  235923 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-959110
	
	I1025 09:11:49.306736  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:49.335209  235923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:49.335480  235923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:11:49.335498  235923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-959110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-959110/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-959110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:11:49.479656  235923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:11:49.479691  235923 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:11:49.479733  235923 ubuntu.go:190] setting up certificates
	I1025 09:11:49.479744  235923 provision.go:84] configureAuth start
	I1025 09:11:49.479811  235923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-959110
	I1025 09:11:49.502011  235923 provision.go:143] copyHostCerts
	I1025 09:11:49.502071  235923 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:11:49.502082  235923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:11:49.502165  235923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:11:49.502306  235923 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:11:49.502327  235923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:11:49.502370  235923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:11:49.502486  235923 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:11:49.502496  235923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:11:49.502532  235923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:11:49.502621  235923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-959110 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-959110]
	I1025 09:11:49.652775  235923 provision.go:177] copyRemoteCerts
	I1025 09:11:49.652832  235923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:11:49.652867  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:49.673689  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:49.782875  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 09:11:49.808648  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:11:49.834267  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:11:49.860181  235923 provision.go:87] duration metric: took 380.419144ms to configureAuth
	I1025 09:11:49.860217  235923 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:11:49.860462  235923 config.go:182] Loaded profile config "old-k8s-version-959110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:11:49.860617  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:49.884497  235923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:11:49.884806  235923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:11:49.884841  235923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:11:47.943971  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:47.944457  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:48.444807  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:48.445275  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:48.944825  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:48.945223  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:49.444790  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:49.445257  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:49.944817  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:49.945236  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:50.444922  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:50.445321  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:50.943964  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:50.944387  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:51.444793  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:51.445211  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:51.944772  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:51.945189  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:52.444788  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:52.445196  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:51.528828  235923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:11:51.528857  235923 machine.go:96] duration metric: took 5.58428648s to provisionDockerMachine
	I1025 09:11:51.528872  235923 start.go:293] postStartSetup for "old-k8s-version-959110" (driver="docker")
	I1025 09:11:51.528885  235923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:11:51.528946  235923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:11:51.529018  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:51.549734  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:51.651785  235923 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:11:51.655807  235923 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:11:51.655846  235923 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:11:51.655859  235923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:11:51.655929  235923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:11:51.656029  235923 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:11:51.656144  235923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:11:51.664096  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:11:51.682429  235923 start.go:296] duration metric: took 153.540864ms for postStartSetup
	I1025 09:11:51.682526  235923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:11:51.682574  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:51.702047  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:51.801019  235923 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:11:51.805816  235923 fix.go:56] duration metric: took 6.183840822s for fixHost
	I1025 09:11:51.805849  235923 start.go:83] releasing machines lock for "old-k8s-version-959110", held for 6.183900046s
	I1025 09:11:51.805917  235923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-959110
	I1025 09:11:51.824019  235923 ssh_runner.go:195] Run: cat /version.json
	I1025 09:11:51.824074  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:51.824073  235923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:11:51.824141  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:51.845387  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:51.846074  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:51.943466  235923 ssh_runner.go:195] Run: systemctl --version
	I1025 09:11:51.996901  235923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:11:52.032578  235923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:11:52.037325  235923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:11:52.037391  235923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:11:52.045862  235923 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:11:52.045891  235923 start.go:495] detecting cgroup driver to use...
	I1025 09:11:52.045928  235923 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:11:52.045967  235923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:11:52.063657  235923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:11:52.077965  235923 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:11:52.078047  235923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:11:52.095382  235923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:11:52.109842  235923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:11:52.203920  235923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:11:52.282864  235923 docker.go:234] disabling docker service ...
	I1025 09:11:52.282933  235923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:11:52.298144  235923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:11:52.311383  235923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:11:52.395314  235923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:11:52.477788  235923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:11:52.491321  235923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:11:52.506840  235923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 09:11:52.506908  235923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:52.516424  235923 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:11:52.516477  235923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:52.526789  235923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:52.537304  235923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:52.548088  235923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:11:52.557942  235923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:52.567869  235923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:52.577071  235923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:11:52.586538  235923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:11:52.594621  235923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:11:52.602701  235923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:11:52.684475  235923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:11:52.802006  235923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:11:52.802066  235923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:11:52.806484  235923 start.go:563] Will wait 60s for crictl version
	I1025 09:11:52.806534  235923 ssh_runner.go:195] Run: which crictl
	I1025 09:11:52.810595  235923 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:11:52.837690  235923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:11:52.837769  235923 ssh_runner.go:195] Run: crio --version
	I1025 09:11:52.867733  235923 ssh_runner.go:195] Run: crio --version
	I1025 09:11:52.898480  235923 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 09:11:48.761046  233042 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 09:11:48.761094  233042 cache_images.go:124] Successfully loaded all cached images
	I1025 09:11:48.761102  233042 cache_images.go:93] duration metric: took 10.980596256s to LoadCachedImages
	I1025 09:11:48.761117  233042 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:11:48.761235  233042 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-016092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-016092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:11:48.761343  233042 ssh_runner.go:195] Run: crio config
	I1025 09:11:48.808810  233042 cni.go:84] Creating CNI manager for ""
	I1025 09:11:48.808829  233042 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:11:48.808846  233042 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:11:48.808867  233042 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-016092 NodeName:no-preload-016092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:11:48.808975  233042 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-016092"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
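
	The rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml further down. A staged config like this can be sanity-checked without mutating node state via kubeadm's dry-run mode (a sketch, not part of the logged flow; binary path from the log):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run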
	
	I1025 09:11:48.809034  233042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:11:48.817747  233042 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1025 09:11:48.817811  233042 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1025 09:11:48.826262  233042 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1025 09:11:48.826353  233042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1025 09:11:48.826404  233042 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1025 09:11:48.826440  233042 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1025 09:11:48.830679  233042 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1025 09:11:48.830715  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1025 09:11:49.557567  233042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:11:49.561151  233042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1025 09:11:49.571514  233042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1025 09:11:49.571548  233042 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1025 09:11:49.571587  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1025 09:11:49.580392  233042 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1025 09:11:49.580445  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
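
	The "?checksum=file:...sha256" suffix on the download URLs means each binary is verified against its published digest before being trusted. A manual equivalent for one of them, assuming the dl.k8s.io .sha256 artifact holds just the hex digest:

	    curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet
	    echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256)  kubelet" \
	      | sha256sum -c -
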
	I1025 09:11:49.927451  233042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:11:49.935817  233042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:11:49.950249  233042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:11:49.966889  233042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1025 09:11:49.980742  233042 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:11:49.984820  233042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:11:49.994893  233042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:11:50.081666  233042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:11:50.109616  233042 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092 for IP: 192.168.103.2
	I1025 09:11:50.109657  233042 certs.go:195] generating shared ca certs ...
	I1025 09:11:50.109679  233042 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:50.109837  233042 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:11:50.109903  233042 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:11:50.109918  233042 certs.go:257] generating profile certs ...
	I1025 09:11:50.109985  233042 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.key
	I1025 09:11:50.110003  233042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt with IP's: []
	I1025 09:11:50.213133  233042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt ...
	I1025 09:11:50.213168  233042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: {Name:mk8c4865bc7cc48eb52d0c1ba0e3abe1cf10e682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:50.213375  233042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.key ...
	I1025 09:11:50.213393  233042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.key: {Name:mkfd85a962f9e9728e9f16b264833387cb99d6fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:50.213491  233042 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.key.903ba005
	I1025 09:11:50.213509  233042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.crt.903ba005 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:11:50.266358  233042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.crt.903ba005 ...
	I1025 09:11:50.266413  233042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.crt.903ba005: {Name:mk51e6bb45bb714d3cb74011a02b262ce4b7ea46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:50.266596  233042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.key.903ba005 ...
	I1025 09:11:50.266611  233042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.key.903ba005: {Name:mk02edf64d94cee2808dffbd1c15068d268a4ff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:50.266704  233042 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.crt.903ba005 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.crt
	I1025 09:11:50.266787  233042 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.key.903ba005 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.key
	I1025 09:11:50.266848  233042 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.key
	I1025 09:11:50.266866  233042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.crt with IP's: []
	I1025 09:11:50.739991  233042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.crt ...
	I1025 09:11:50.740021  233042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.crt: {Name:mkbc173c26e7e1a0c996519db39da96246974234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:50.740186  233042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.key ...
	I1025 09:11:50.740198  233042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.key: {Name:mkd98b23d8cc8a44c81b9674e3e890bcb40214ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
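
	The crypto.go steps above mint per-profile certificates signed by the shared minikube CA. A rough openssl equivalent for the "minikube-user" client cert (the subject fields here are assumptions based on common minikube defaults, not read from this log):

	    openssl req -new -newkey rsa:2048 -nodes -keyout client.key \
	      -subj "/O=system:masters/CN=minikube-user" -out client.csr
	    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	      -days 365 -out client.crt
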
	I1025 09:11:50.740391  233042 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:11:50.740430  233042 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:11:50.740437  233042 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:11:50.740466  233042 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:11:50.740490  233042 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:11:50.740509  233042 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:11:50.740547  233042 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:11:50.741251  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:11:50.760672  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:11:50.779605  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:11:50.798383  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:11:50.817182  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:11:50.836734  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:11:50.855837  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:11:50.874705  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:11:50.892852  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:11:51.007410  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:11:51.028896  233042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:11:51.049472  233042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:11:51.063848  233042 ssh_runner.go:195] Run: openssl version
	I1025 09:11:51.070458  233042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:11:51.080023  233042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:11:51.084350  233042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:11:51.084426  233042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:11:51.120203  233042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:11:51.129441  233042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:11:51.139412  233042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:51.143504  233042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:51.143563  233042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:51.178242  233042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:11:51.187849  233042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:11:51.197089  233042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:11:51.201301  233042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:11:51.201374  233042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:11:51.237820  233042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
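
	The ".0" symlink names above follow OpenSSL's subject-hash lookup convention, which is why b5213941.0 pairs with minikubeCA.pem. The mapping is reproducible by hand:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
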
	I1025 09:11:51.247668  233042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:11:51.251723  233042 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:11:51.251786  233042 kubeadm.go:400] StartCluster: {Name:no-preload-016092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-016092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:11:51.251860  233042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:11:51.251917  233042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:11:51.280350  233042 cri.go:89] found id: ""
	I1025 09:11:51.280411  233042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:11:51.289150  233042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:11:51.297696  233042 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:11:51.297756  233042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:11:51.306218  233042 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:11:51.306242  233042 kubeadm.go:157] found existing configuration files:
	
	I1025 09:11:51.306292  233042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:11:51.314449  233042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:11:51.314531  233042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:11:51.322336  233042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:11:51.330224  233042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:11:51.330302  233042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:11:51.338079  233042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:11:51.346132  233042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:11:51.346179  233042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:11:51.353818  233042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:11:51.361677  233042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:11:51.361736  233042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
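
	Each status-2 grep above simply means the file is absent, so the paired rm -f is a no-op; the sequence is idempotent stale-config cleanup. The per-file pattern, sketched:

	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	      || sudo rm -f /etc/kubernetes/admin.conf
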
	I1025 09:11:51.369544  233042 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:11:51.430977  233042 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:11:51.495552  233042 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
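
	Both preflight warnings are expected in this environment: the kernel "configs" module is usually unavailable inside container-based nodes, and minikube starts kubelet itself instead of enabling the unit. Silencing the second warning anyway is the fix the message itself names:

	    sudo systemctl enable kubelet.service
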
	I1025 09:11:52.899674  235923 cli_runner.go:164] Run: docker network inspect old-k8s-version-959110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:11:52.917324  235923 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 09:11:52.921740  235923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:11:52.932035  235923 kubeadm.go:883] updating cluster {Name:old-k8s-version-959110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-959110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:11:52.933507  235923 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:11:52.933897  235923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:11:52.967053  235923 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:11:52.967080  235923 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:11:52.967131  235923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:11:52.994822  235923 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:11:52.994846  235923 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:11:52.994856  235923 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1025 09:11:52.994960  235923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-959110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-959110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:11:52.995038  235923 ssh_runner.go:195] Run: crio config
	I1025 09:11:53.044462  235923 cni.go:84] Creating CNI manager for ""
	I1025 09:11:53.044484  235923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:11:53.044504  235923 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:11:53.044525  235923 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-959110 NodeName:old-k8s-version-959110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:11:53.044690  235923 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-959110"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
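
	Note the schema difference from the v1.34.1 config earlier in the log: kubeadm.k8s.io/v1beta4 expresses extraArgs as a list of name/value pairs, while this v1beta3 config generated for v1.28.0 uses plain key: value maps (and additionally pins etcd's proxy-refresh-interval). minikube evidently picks the config API version to match the target kubeadm release.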
	
	I1025 09:11:53.044751  235923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 09:11:53.054005  235923 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:11:53.054075  235923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:11:53.062230  235923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 09:11:53.075684  235923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:11:53.089031  235923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1025 09:11:53.101995  235923 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:11:53.106344  235923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:11:53.116564  235923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:11:53.198390  235923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:11:53.233620  235923 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110 for IP: 192.168.94.2
	I1025 09:11:53.233659  235923 certs.go:195] generating shared ca certs ...
	I1025 09:11:53.233682  235923 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:53.233851  235923 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:11:53.233908  235923 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:11:53.233922  235923 certs.go:257] generating profile certs ...
	I1025 09:11:53.234037  235923 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/client.key
	I1025 09:11:53.234103  235923 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/apiserver.key.a93c3162
	I1025 09:11:53.234147  235923 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/proxy-client.key
	I1025 09:11:53.234291  235923 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:11:53.234328  235923 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:11:53.234340  235923 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:11:53.234374  235923 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:11:53.234405  235923 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:11:53.234433  235923 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:11:53.234488  235923 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:11:53.235207  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:11:53.256779  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:11:53.278048  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:11:53.298833  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:11:53.322123  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 09:11:53.346398  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:11:53.363578  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:11:53.380615  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:11:53.399011  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:11:53.417123  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:11:53.435474  235923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:11:53.454487  235923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:11:53.467958  235923 ssh_runner.go:195] Run: openssl version
	I1025 09:11:53.474414  235923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:11:53.484230  235923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:53.488304  235923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:53.488363  235923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:11:53.523030  235923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:11:53.531799  235923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:11:53.541091  235923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:11:53.545167  235923 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:11:53.545225  235923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:11:53.580015  235923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:11:53.588387  235923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:11:53.596998  235923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:11:53.600903  235923 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:11:53.600963  235923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:11:53.636924  235923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:11:53.645843  235923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:11:53.650001  235923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:11:53.684942  235923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:11:53.720478  235923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:11:53.764940  235923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:11:53.808755  235923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:11:53.862616  235923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
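
	openssl's -checkend N exits 0 only if the certificate is still valid N seconds from now, so the 86400 checks above ask whether each cert survives the next 24 hours, e.g.:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"
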
	I1025 09:11:53.922132  235923 kubeadm.go:400] StartCluster: {Name:old-k8s-version-959110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-959110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:11:53.922233  235923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:11:53.922287  235923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:11:53.959189  235923 cri.go:89] found id: "e15713036371f805b74f2d057e2867132a9b8ed98c416e4d6e43fe9ffa9cbd9e"
	I1025 09:11:53.959212  235923 cri.go:89] found id: "9466b431271e21f3a242dc756379276676595e7eb555ed6f14657af03640240f"
	I1025 09:11:53.959218  235923 cri.go:89] found id: "3f24a504d288f733fe74c74fa02786888ccd69f7186ec1db7ea9f52d71c6e6a8"
	I1025 09:11:53.959223  235923 cri.go:89] found id: "7dd332f2bf0d902a5c1b6207fed896fb2e0bd13cb11ed5aa25e88769cf340c1d"
	I1025 09:11:53.959227  235923 cri.go:89] found id: ""
	I1025 09:11:53.959275  235923 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:11:53.972414  235923 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:11:53Z" level=error msg="open /run/runc: no such file or directory"
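
	The runc failure is non-fatal: with no /run/runc state directory there are no runc-tracked (hence no paused) containers to resume, so the warning is logged and startup proceeds to the restart path below.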
	I1025 09:11:53.972488  235923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:11:53.981207  235923 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:11:53.981227  235923 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:11:53.981279  235923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:11:53.990071  235923 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:11:53.991066  235923 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-959110" does not appear in /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:11:53.991513  235923 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-5966/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-959110" cluster setting kubeconfig missing "old-k8s-version-959110" context setting]
	I1025 09:11:53.992341  235923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
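
	After the repair, the freshly written cluster and context stanzas can be confirmed with kubectl (kubeconfig path and context name from the log):

	    kubectl --kubeconfig /home/jenkins/minikube-integration/21796-5966/kubeconfig \
	      config get-contexts old-k8s-version-959110
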
	I1025 09:11:53.994350  235923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:11:54.003852  235923 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 09:11:54.003894  235923 kubeadm.go:601] duration metric: took 22.66018ms to restartPrimaryControlPlane
	I1025 09:11:54.003906  235923 kubeadm.go:402] duration metric: took 81.782241ms to StartCluster
	I1025 09:11:54.003924  235923 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:54.003999  235923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:11:54.005129  235923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:11:54.005346  235923 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:11:54.005597  235923 config.go:182] Loaded profile config "old-k8s-version-959110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:11:54.005672  235923 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:11:54.005754  235923 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-959110"
	I1025 09:11:54.005774  235923 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-959110"
	W1025 09:11:54.005784  235923 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:11:54.005811  235923 host.go:66] Checking if "old-k8s-version-959110" exists ...
	I1025 09:11:54.006294  235923 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:54.006382  235923 addons.go:69] Setting dashboard=true in profile "old-k8s-version-959110"
	I1025 09:11:54.006421  235923 addons.go:238] Setting addon dashboard=true in "old-k8s-version-959110"
	W1025 09:11:54.006435  235923 addons.go:247] addon dashboard should already be in state true
	I1025 09:11:54.006468  235923 host.go:66] Checking if "old-k8s-version-959110" exists ...
	I1025 09:11:54.006554  235923 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-959110"
	I1025 09:11:54.006577  235923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-959110"
	I1025 09:11:54.006884  235923 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:54.006959  235923 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:54.007673  235923 out.go:179] * Verifying Kubernetes components...
	I1025 09:11:54.011001  235923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:11:54.036819  235923 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-959110"
	W1025 09:11:54.036901  235923 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:11:54.036945  235923 host.go:66] Checking if "old-k8s-version-959110" exists ...
	I1025 09:11:54.037003  235923 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:11:54.037057  235923 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:11:54.037488  235923 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:11:54.038406  235923 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:11:54.038426  235923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:11:54.038483  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:54.043225  235923 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:11:54.044434  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:11:54.044452  235923 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:11:54.044508  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:54.070172  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:54.073252  235923 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:11:54.073276  235923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:11:54.073349  235923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:11:54.079171  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:54.102817  235923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:11:54.206397  235923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:11:54.206817  235923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:11:54.221368  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:11:54.221456  235923 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:11:54.226993  235923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:11:54.232151  235923 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-959110" to be "Ready" ...
	I1025 09:11:54.242062  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:11:54.242133  235923 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:11:54.260211  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:11:54.260288  235923 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:11:54.277457  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:11:54.277482  235923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:11:54.297939  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:11:54.297967  235923 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:11:54.318071  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:11:54.318095  235923 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:11:54.334397  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:11:54.334425  235923 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:11:54.351538  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:11:54.351565  235923 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:11:54.366514  235923 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:11:54.366547  235923 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:11:54.380609  235923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
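
The pattern in the lines above is the same for every addon: copy each manifest into /etc/kubernetes/addons/ over SSH ("scp ... -->"), then apply the whole set with the version-pinned kubectl binary and KUBECONFIG pointing at the in-VM kubeconfig. A rough Go sketch of that final apply step (paths as they appear in the log; the sketch runs the command directly rather than over SSH, and the manifest list is abbreviated):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        args := []string{
            "apply",
            "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
            "-f", "/etc/kubernetes/addons/dashboard-dp.yaml",
            "-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
            // ...remaining dashboard manifests elided for brevity
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.28.0/kubectl", args...)
        // Point kubectl at the cluster's kubeconfig, as the logged command does.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl apply failed: %v", err)
        }
    }
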
	I1025 09:11:52.943870  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:52.944282  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:53.444813  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:53.445214  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:53.943962  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:53.944449  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:54.444815  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:54.445273  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:11:54.944796  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:11:54.945223  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
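
The healthz probe failing repeatedly above is a plain HTTPS GET against the apiserver, retried on an interval until it returns 200 or the surrounding deadline fires; "connection refused" just means the apiserver is not listening yet during the restart. A minimal sketch of that loop (endpoint from the log; TLS verification is skipped here only because the probe runs before a CA-aware client is wired up):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Probe only; real clients should verify the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            // Matches the roughly 500ms cadence visible in the timestamps above.
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for /healthz")
    }
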
	I1025 09:11:55.443933  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:11:55.444011  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:11:55.471573  225660 cri.go:89] found id: "de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f"
	I1025 09:11:55.471596  225660 cri.go:89] found id: ""
	I1025 09:11:55.471604  225660 logs.go:282] 1 containers: [de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f]
	I1025 09:11:55.471684  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:11:55.475797  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:11:55.475859  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:11:55.502370  225660 cri.go:89] found id: ""
	I1025 09:11:55.502398  225660 logs.go:282] 0 containers: []
	W1025 09:11:55.502406  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:11:55.502412  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:11:55.502461  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:11:55.533594  225660 cri.go:89] found id: ""
	I1025 09:11:55.533627  225660 logs.go:282] 0 containers: []
	W1025 09:11:55.533636  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:11:55.533673  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:11:55.533734  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:11:55.562730  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:11:55.562759  225660 cri.go:89] found id: ""
	I1025 09:11:55.562770  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:11:55.562828  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:11:55.566897  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:11:55.566957  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:11:55.594029  225660 cri.go:89] found id: ""
	I1025 09:11:55.594057  225660 logs.go:282] 0 containers: []
	W1025 09:11:55.594067  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:11:55.594075  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:11:55.594118  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:11:55.627838  225660 cri.go:89] found id: "0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	I1025 09:11:55.627864  225660 cri.go:89] found id: ""
	I1025 09:11:55.627874  225660 logs.go:282] 1 containers: [0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3]
	I1025 09:11:55.627933  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:11:55.632702  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:11:55.632779  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:11:55.671061  225660 cri.go:89] found id: ""
	I1025 09:11:55.671089  225660 logs.go:282] 0 containers: []
	W1025 09:11:55.671100  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:11:55.671108  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:11:55.671220  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:11:55.705115  225660 cri.go:89] found id: ""
	I1025 09:11:55.705145  225660 logs.go:282] 0 containers: []
	W1025 09:11:55.705155  225660 logs.go:284] No container was found matching "storage-provisioner"
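
Each "listing CRI containers" step above shells out to crictl with a name filter and collects the returned IDs; an empty result is what produces the `No container was found matching` warnings. A sketch of that discovery step:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (any state) whose name matches the filter.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // --quiet prints one ID per line; Fields drops blank lines.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            log.Fatal(err)
        }
        if len(ids) == 0 {
            log.Println(`No container was found matching "kube-apiserver"`)
            return
        }
        fmt.Println("found:", ids)
    }
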
	I1025 09:11:55.705167  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:11:55.705182  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:11:55.757117  225660 logs.go:123] Gathering logs for kube-controller-manager [0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3] ...
	I1025 09:11:55.757157  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	I1025 09:11:55.788882  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:11:55.788922  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:11:55.833363  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:11:55.833475  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:11:55.872504  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:11:55.872530  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:11:55.954391  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:11:55.954425  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:11:55.975725  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:11:55.975757  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 09:11:56.701065  235923 node_ready.go:49] node "old-k8s-version-959110" is "Ready"
	I1025 09:11:56.701104  235923 node_ready.go:38] duration metric: took 2.46890555s for node "old-k8s-version-959110" to be "Ready" ...
	I1025 09:11:56.701120  235923 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:11:56.701172  235923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:11:57.536686  235923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.329780846s)
	I1025 09:11:57.536711  235923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.309640413s)
	I1025 09:11:58.056578  235923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.675911723s)
	I1025 09:11:58.056850  235923 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.355632438s)
	I1025 09:11:58.056875  235923 api_server.go:72] duration metric: took 4.051507192s to wait for apiserver process to appear ...
	I1025 09:11:58.056882  235923 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:11:58.056900  235923 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 09:11:58.059160  235923 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-959110 addons enable metrics-server
	
	I1025 09:11:58.060621  235923 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 09:11:58.062138  235923 addons.go:514] duration metric: took 4.05646681s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 09:11:58.062345  235923 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 09:11:58.063996  235923 api_server.go:141] control plane version: v1.28.0
	I1025 09:11:58.064047  235923 api_server.go:131] duration metric: took 7.158681ms to wait for apiserver health ...
	I1025 09:11:58.064056  235923 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:11:58.069025  235923 system_pods.go:59] 8 kube-system pods found
	I1025 09:11:58.069063  235923 system_pods.go:61] "coredns-5dd5756b68-wm9rk" [865c21db-7403-433a-b306-c34726b80124] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:11:58.069074  235923 system_pods.go:61] "etcd-old-k8s-version-959110" [be4c6227-9c8c-4f98-8c9e-739c4c922ee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:11:58.069085  235923 system_pods.go:61] "kindnet-gq9q4" [7ea77cbc-ce8d-488d-8ced-0328e783cba0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:11:58.069094  235923 system_pods.go:61] "kube-apiserver-old-k8s-version-959110" [fcba789f-8536-4ef7-8516-ddcd2ea91609] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:11:58.069110  235923 system_pods.go:61] "kube-controller-manager-old-k8s-version-959110" [d4bd9320-ac8b-4669-ae0e-b1d742f172a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:11:58.069118  235923 system_pods.go:61] "kube-proxy-zrfv4" [5deb9893-69f7-459d-87c3-30ecc26ca937] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:11:58.069129  235923 system_pods.go:61] "kube-scheduler-old-k8s-version-959110" [f53af926-4da1-40e2-ac93-a045432d16b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:11:58.069140  235923 system_pods.go:61] "storage-provisioner" [e3046c99-91ff-4a4f-9bf2-cb82470c9b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:11:58.069147  235923 system_pods.go:74] duration metric: took 5.085207ms to wait for pod list to return data ...
	I1025 09:11:58.069160  235923 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:11:58.071701  235923 default_sa.go:45] found service account: "default"
	I1025 09:11:58.071723  235923 default_sa.go:55] duration metric: took 2.556538ms for default service account to be created ...
	I1025 09:11:58.071734  235923 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:11:58.077345  235923 system_pods.go:86] 8 kube-system pods found
	I1025 09:11:58.077378  235923 system_pods.go:89] "coredns-5dd5756b68-wm9rk" [865c21db-7403-433a-b306-c34726b80124] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:11:58.077393  235923 system_pods.go:89] "etcd-old-k8s-version-959110" [be4c6227-9c8c-4f98-8c9e-739c4c922ee8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:11:58.077445  235923 system_pods.go:89] "kindnet-gq9q4" [7ea77cbc-ce8d-488d-8ced-0328e783cba0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:11:58.077458  235923 system_pods.go:89] "kube-apiserver-old-k8s-version-959110" [fcba789f-8536-4ef7-8516-ddcd2ea91609] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:11:58.077473  235923 system_pods.go:89] "kube-controller-manager-old-k8s-version-959110" [d4bd9320-ac8b-4669-ae0e-b1d742f172a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:11:58.077487  235923 system_pods.go:89] "kube-proxy-zrfv4" [5deb9893-69f7-459d-87c3-30ecc26ca937] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:11:58.077498  235923 system_pods.go:89] "kube-scheduler-old-k8s-version-959110" [f53af926-4da1-40e2-ac93-a045432d16b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:11:58.077516  235923 system_pods.go:89] "storage-provisioner" [e3046c99-91ff-4a4f-9bf2-cb82470c9b75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:11:58.077527  235923 system_pods.go:126] duration metric: took 5.787249ms to wait for k8s-apps to be running ...
	I1025 09:11:58.077539  235923 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:11:58.077585  235923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:11:58.094955  235923 system_svc.go:56] duration metric: took 17.407161ms WaitForService to wait for kubelet
	I1025 09:11:58.094984  235923 kubeadm.go:586] duration metric: took 4.089615964s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:11:58.095003  235923 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:11:58.098315  235923 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:11:58.098347  235923 node_conditions.go:123] node cpu capacity is 8
	I1025 09:11:58.098363  235923 node_conditions.go:105] duration metric: took 3.353406ms to run NodePressure ...
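
The NodePressure verification above reads the node's reported capacity (304681132Ki of ephemeral storage and 8 CPUs on this runner) from its status. A hedged client-go sketch of the same read, assuming a kubeconfig at the conventional ~/.kube/config location:

    package main

    import (
        "context"
        "fmt"
        "log"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
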
	I1025 09:11:58.098376  235923 start.go:241] waiting for startup goroutines ...
	I1025 09:11:58.098395  235923 start.go:246] waiting for cluster config update ...
	I1025 09:11:58.098416  235923 start.go:255] writing updated cluster config ...
	I1025 09:11:58.098746  235923 ssh_runner.go:195] Run: rm -f paused
	I1025 09:11:58.103573  235923 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:11:58.108986  235923 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wm9rk" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:12:00.115914  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	I1025 09:12:02.163005  233042 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:12:02.163089  233042 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:12:02.163167  233042 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:12:02.163258  233042 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:12:02.163317  233042 kubeadm.go:318] OS: Linux
	I1025 09:12:02.163377  233042 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:12:02.163421  233042 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:12:02.163468  233042 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:12:02.163510  233042 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:12:02.163551  233042 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:12:02.163606  233042 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:12:02.163677  233042 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:12:02.163725  233042 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:12:02.163792  233042 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:12:02.163917  233042 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:12:02.164071  233042 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:12:02.164154  233042 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:12:02.166091  233042 out.go:252]   - Generating certificates and keys ...
	I1025 09:12:02.166160  233042 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:12:02.166217  233042 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:12:02.166285  233042 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:12:02.166352  233042 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:12:02.166404  233042 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:12:02.166447  233042 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:12:02.166535  233042 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:12:02.166760  233042 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-016092] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:12:02.166814  233042 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:12:02.166968  233042 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-016092] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:12:02.167081  233042 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:12:02.167146  233042 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:12:02.167191  233042 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:12:02.167269  233042 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:12:02.167343  233042 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:12:02.167399  233042 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:12:02.167445  233042 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:12:02.167511  233042 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:12:02.167564  233042 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:12:02.167684  233042 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:12:02.167774  233042 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:12:02.169145  233042 out.go:252]   - Booting up control plane ...
	I1025 09:12:02.169225  233042 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:12:02.169294  233042 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:12:02.169357  233042 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:12:02.169459  233042 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:12:02.169554  233042 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:12:02.169699  233042 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:12:02.169774  233042 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:12:02.169809  233042 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:12:02.169934  233042 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:12:02.170030  233042 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:12:02.170083  233042 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.774346ms
	I1025 09:12:02.170205  233042 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:12:02.170314  233042 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1025 09:12:02.170456  233042 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:12:02.170575  233042 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:12:02.170689  233042 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.421969207s
	I1025 09:12:02.170752  233042 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.542971645s
	I1025 09:12:02.170819  233042 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501085914s
	I1025 09:12:02.170924  233042 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:12:02.171039  233042 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:12:02.171089  233042 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:12:02.171258  233042 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-016092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:12:02.171317  233042 kubeadm.go:318] [bootstrap-token] Using token: d1fv77.gqxjhvi5ymp6h3lk
	I1025 09:12:02.172967  233042 out.go:252]   - Configuring RBAC rules ...
	I1025 09:12:02.173089  233042 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:12:02.173193  233042 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:12:02.173334  233042 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:12:02.173452  233042 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:12:02.173599  233042 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:12:02.173764  233042 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:12:02.173905  233042 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:12:02.173979  233042 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:12:02.174057  233042 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:12:02.174151  233042 kubeadm.go:318] 
	I1025 09:12:02.174246  233042 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:12:02.174256  233042 kubeadm.go:318] 
	I1025 09:12:02.174405  233042 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:12:02.174417  233042 kubeadm.go:318] 
	I1025 09:12:02.174453  233042 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:12:02.174545  233042 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:12:02.174622  233042 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:12:02.174631  233042 kubeadm.go:318] 
	I1025 09:12:02.174720  233042 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:12:02.174739  233042 kubeadm.go:318] 
	I1025 09:12:02.174806  233042 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:12:02.174818  233042 kubeadm.go:318] 
	I1025 09:12:02.174897  233042 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:12:02.174994  233042 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:12:02.175095  233042 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:12:02.175109  233042 kubeadm.go:318] 
	I1025 09:12:02.175214  233042 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:12:02.175321  233042 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:12:02.175330  233042 kubeadm.go:318] 
	I1025 09:12:02.175437  233042 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token d1fv77.gqxjhvi5ymp6h3lk \
	I1025 09:12:02.175535  233042 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:12:02.175555  233042 kubeadm.go:318] 	--control-plane 
	I1025 09:12:02.175559  233042 kubeadm.go:318] 
	I1025 09:12:02.175671  233042 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:12:02.175684  233042 kubeadm.go:318] 
	I1025 09:12:02.175808  233042 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token d1fv77.gqxjhvi5ymp6h3lk \
	I1025 09:12:02.175974  233042 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
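
The --discovery-token-ca-cert-hash printed in the join command above is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A short sketch that recomputes it for verification, reading the CA from minikube's certificateDir noted earlier in the kubeadm output:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
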
	I1025 09:12:02.175993  233042 cni.go:84] Creating CNI manager for ""
	I1025 09:12:02.176003  233042 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:12:02.178203  233042 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:12:02.179260  233042 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:12:02.183690  233042 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:12:02.183705  233042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:12:02.197305  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:12:02.410022  233042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:12:02.410093  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:02.410139  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-016092 minikube.k8s.io/updated_at=2025_10_25T09_12_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=no-preload-016092 minikube.k8s.io/primary=true
	I1025 09:12:02.423025  233042 ops.go:34] apiserver oom_adj: -16
	I1025 09:12:02.504299  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:03.005381  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:03.504922  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1025 09:12:02.614085  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:04.614619  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	I1025 09:12:04.005199  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:04.505205  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:05.005231  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:05.504846  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:06.004811  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:06.505246  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:07.005168  233042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:12:07.074239  233042 kubeadm.go:1113] duration metric: took 4.664210657s to wait for elevateKubeSystemPrivileges
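
The repeated "get sa default" runs above are minikube waiting for the default ServiceAccount to exist before granting kube-system the minikube-rbac cluster-admin binding; the controller-manager creates that account shortly after the control plane settles, hence the ~500ms polling. A client-go sketch of the same wait, using the in-VM kubeconfig path from the log:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                // Keep polling through NotFound until the service account exists.
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                return err == nil, nil
            })
        if err != nil {
            log.Fatal("default service account never appeared: ", err)
        }
        fmt.Println("default service account is ready")
    }
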
	I1025 09:12:07.074284  233042 kubeadm.go:402] duration metric: took 15.822501116s to StartCluster
	I1025 09:12:07.074309  233042 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:07.074380  233042 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:12:07.076253  233042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:12:07.076535  233042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:12:07.076568  233042 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:12:07.076656  233042 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:12:07.076765  233042 config.go:182] Loaded profile config "no-preload-016092": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:12:07.076776  233042 addons.go:69] Setting default-storageclass=true in profile "no-preload-016092"
	I1025 09:12:07.076806  233042 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-016092"
	I1025 09:12:07.076763  233042 addons.go:69] Setting storage-provisioner=true in profile "no-preload-016092"
	I1025 09:12:07.076960  233042 addons.go:238] Setting addon storage-provisioner=true in "no-preload-016092"
	I1025 09:12:07.077002  233042 host.go:66] Checking if "no-preload-016092" exists ...
	I1025 09:12:07.077227  233042 cli_runner.go:164] Run: docker container inspect no-preload-016092 --format={{.State.Status}}
	I1025 09:12:07.077564  233042 cli_runner.go:164] Run: docker container inspect no-preload-016092 --format={{.State.Status}}
	I1025 09:12:07.079204  233042 out.go:179] * Verifying Kubernetes components...
	I1025 09:12:07.081907  233042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:07.101880  233042 addons.go:238] Setting addon default-storageclass=true in "no-preload-016092"
	I1025 09:12:07.101918  233042 host.go:66] Checking if "no-preload-016092" exists ...
	I1025 09:12:07.102386  233042 cli_runner.go:164] Run: docker container inspect no-preload-016092 --format={{.State.Status}}
	I1025 09:12:07.105152  233042 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:12:07.107376  233042 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:12:07.107413  233042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:12:07.107479  233042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:07.138664  233042 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:12:07.138692  233042 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:12:07.138755  233042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:07.142364  233042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:12:07.161557  233042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:12:07.172407  233042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:12:07.220226  233042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:12:07.259386  233042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:12:07.276999  233042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:12:07.355374  233042 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1025 09:12:07.357681  233042 node_ready.go:35] waiting up to 6m0s for node "no-preload-016092" to be "Ready" ...
	I1025 09:12:07.583124  233042 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:12:06.050562  225660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.074782112s)
	W1025 09:12:06.050600  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1025 09:12:06.050610  225660 logs.go:123] Gathering logs for kube-apiserver [de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f] ...
	I1025 09:12:06.050632  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f"
	I1025 09:12:07.584201  233042 addons.go:514] duration metric: took 507.542288ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:12:07.861192  233042 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-016092" context rescaled to 1 replicas
	W1025 09:12:06.615133  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:08.616142  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	I1025 09:12:08.588796  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1025 09:12:09.360793  233042 node_ready.go:57] node "no-preload-016092" has "Ready":"False" status (will retry)
	W1025 09:12:11.361398  233042 node_ready.go:57] node "no-preload-016092" has "Ready":"False" status (will retry)
	W1025 09:12:11.115508  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:13.116142  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	I1025 09:12:13.589249  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 09:12:13.589329  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:12:13.589397  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:12:13.625978  225660 cri.go:89] found id: "112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:13.626003  225660 cri.go:89] found id: "de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f"
	I1025 09:12:13.626024  225660 cri.go:89] found id: ""
	I1025 09:12:13.626033  225660 logs.go:282] 2 containers: [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9 de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f]
	I1025 09:12:13.626096  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:13.630932  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:13.635478  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:12:13.635553  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:12:13.667995  225660 cri.go:89] found id: ""
	I1025 09:12:13.668020  225660 logs.go:282] 0 containers: []
	W1025 09:12:13.668030  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:12:13.668037  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:12:13.668107  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:12:13.702615  225660 cri.go:89] found id: ""
	I1025 09:12:13.702656  225660 logs.go:282] 0 containers: []
	W1025 09:12:13.702668  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:12:13.702676  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:12:13.702744  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:12:13.737523  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:13.737547  225660 cri.go:89] found id: ""
	I1025 09:12:13.737557  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:12:13.737621  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:13.742347  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:12:13.742418  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:12:13.775327  225660 cri.go:89] found id: ""
	I1025 09:12:13.775367  225660 logs.go:282] 0 containers: []
	W1025 09:12:13.775378  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:12:13.775386  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:12:13.775442  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:12:13.811488  225660 cri.go:89] found id: "3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	I1025 09:12:13.811516  225660 cri.go:89] found id: "0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	I1025 09:12:13.811519  225660 cri.go:89] found id: ""
	I1025 09:12:13.811526  225660 logs.go:282] 2 containers: [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6 0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3]
	I1025 09:12:13.811593  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:13.815884  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:13.820065  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:12:13.820127  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:12:13.852159  225660 cri.go:89] found id: ""
	I1025 09:12:13.852187  225660 logs.go:282] 0 containers: []
	W1025 09:12:13.852198  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:12:13.852205  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:12:13.852264  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:12:13.888044  225660 cri.go:89] found id: ""
	I1025 09:12:13.888073  225660 logs.go:282] 0 containers: []
	W1025 09:12:13.888084  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:12:13.888106  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:12:13.888119  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:12:13.980997  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:12:13.981037  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:12:13.861869  233042 node_ready.go:57] node "no-preload-016092" has "Ready":"False" status (will retry)
	W1025 09:12:16.361491  233042 node_ready.go:57] node "no-preload-016092" has "Ready":"False" status (will retry)
	W1025 09:12:15.615546  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:18.115902  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:18.861286  233042 node_ready.go:57] node "no-preload-016092" has "Ready":"False" status (will retry)
	I1025 09:12:19.861459  233042 node_ready.go:49] node "no-preload-016092" is "Ready"
	I1025 09:12:19.861491  233042 node_ready.go:38] duration metric: took 12.503777937s for node "no-preload-016092" to be "Ready" ...
	I1025 09:12:19.861507  233042 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:12:19.861565  233042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:12:19.874350  233042 api_server.go:72] duration metric: took 12.797732613s to wait for apiserver process to appear ...
	I1025 09:12:19.874396  233042 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:12:19.874420  233042 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:12:19.878659  233042 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:12:19.879568  233042 api_server.go:141] control plane version: v1.34.1
	I1025 09:12:19.879592  233042 api_server.go:131] duration metric: took 5.188523ms to wait for apiserver health ...
	I1025 09:12:19.879600  233042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:12:19.882929  233042 system_pods.go:59] 8 kube-system pods found
	I1025 09:12:19.882959  233042 system_pods.go:61] "coredns-66bc5c9577-g85s4" [add063fb-dbe3-4105-a73c-96db0d8a222e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:12:19.882965  233042 system_pods.go:61] "etcd-no-preload-016092" [0f6e4cb6-316b-43df-a462-23149355ec0f] Running
	I1025 09:12:19.882972  233042 system_pods.go:61] "kindnet-mjnmk" [3ee7c0f6-4cc5-49c6-a75d-d872e4815e55] Running
	I1025 09:12:19.882976  233042 system_pods.go:61] "kube-apiserver-no-preload-016092" [fc491147-f261-4ca8-b21b-d27eaa4e5fec] Running
	I1025 09:12:19.882979  233042 system_pods.go:61] "kube-controller-manager-no-preload-016092" [3b9d1750-5255-49b9-9d5d-16435d3f9ddb] Running
	I1025 09:12:19.882983  233042 system_pods.go:61] "kube-proxy-h4nh4" [2e6f1992-cab6-4299-b446-192d73e4d08f] Running
	I1025 09:12:19.882986  233042 system_pods.go:61] "kube-scheduler-no-preload-016092" [3bb354fe-bc77-46fe-92a5-e47fb1abe772] Running
	I1025 09:12:19.882990  233042 system_pods.go:61] "storage-provisioner" [c0f58e72-c483-4f5f-8073-ab1db7b48dee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:12:19.883000  233042 system_pods.go:74] duration metric: took 3.394678ms to wait for pod list to return data ...
	I1025 09:12:19.883011  233042 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:12:19.885482  233042 default_sa.go:45] found service account: "default"
	I1025 09:12:19.885502  233042 default_sa.go:55] duration metric: took 2.486114ms for default service account to be created ...
	I1025 09:12:19.885520  233042 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:12:19.888488  233042 system_pods.go:86] 8 kube-system pods found
	I1025 09:12:19.888514  233042 system_pods.go:89] "coredns-66bc5c9577-g85s4" [add063fb-dbe3-4105-a73c-96db0d8a222e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:12:19.888519  233042 system_pods.go:89] "etcd-no-preload-016092" [0f6e4cb6-316b-43df-a462-23149355ec0f] Running
	I1025 09:12:19.888526  233042 system_pods.go:89] "kindnet-mjnmk" [3ee7c0f6-4cc5-49c6-a75d-d872e4815e55] Running
	I1025 09:12:19.888529  233042 system_pods.go:89] "kube-apiserver-no-preload-016092" [fc491147-f261-4ca8-b21b-d27eaa4e5fec] Running
	I1025 09:12:19.888533  233042 system_pods.go:89] "kube-controller-manager-no-preload-016092" [3b9d1750-5255-49b9-9d5d-16435d3f9ddb] Running
	I1025 09:12:19.888535  233042 system_pods.go:89] "kube-proxy-h4nh4" [2e6f1992-cab6-4299-b446-192d73e4d08f] Running
	I1025 09:12:19.888539  233042 system_pods.go:89] "kube-scheduler-no-preload-016092" [3bb354fe-bc77-46fe-92a5-e47fb1abe772] Running
	I1025 09:12:19.888543  233042 system_pods.go:89] "storage-provisioner" [c0f58e72-c483-4f5f-8073-ab1db7b48dee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:12:19.888560  233042 retry.go:31] will retry after 206.370457ms: missing components: kube-dns
	I1025 09:12:20.100058  233042 system_pods.go:86] 8 kube-system pods found
	I1025 09:12:20.100096  233042 system_pods.go:89] "coredns-66bc5c9577-g85s4" [add063fb-dbe3-4105-a73c-96db0d8a222e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:12:20.100105  233042 system_pods.go:89] "etcd-no-preload-016092" [0f6e4cb6-316b-43df-a462-23149355ec0f] Running
	I1025 09:12:20.100112  233042 system_pods.go:89] "kindnet-mjnmk" [3ee7c0f6-4cc5-49c6-a75d-d872e4815e55] Running
	I1025 09:12:20.100117  233042 system_pods.go:89] "kube-apiserver-no-preload-016092" [fc491147-f261-4ca8-b21b-d27eaa4e5fec] Running
	I1025 09:12:20.100123  233042 system_pods.go:89] "kube-controller-manager-no-preload-016092" [3b9d1750-5255-49b9-9d5d-16435d3f9ddb] Running
	I1025 09:12:20.100129  233042 system_pods.go:89] "kube-proxy-h4nh4" [2e6f1992-cab6-4299-b446-192d73e4d08f] Running
	I1025 09:12:20.100135  233042 system_pods.go:89] "kube-scheduler-no-preload-016092" [3bb354fe-bc77-46fe-92a5-e47fb1abe772] Running
	I1025 09:12:20.100144  233042 system_pods.go:89] "storage-provisioner" [c0f58e72-c483-4f5f-8073-ab1db7b48dee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:12:20.100171  233042 retry.go:31] will retry after 361.69068ms: missing components: kube-dns
	I1025 09:12:20.466447  233042 system_pods.go:86] 8 kube-system pods found
	I1025 09:12:20.466485  233042 system_pods.go:89] "coredns-66bc5c9577-g85s4" [add063fb-dbe3-4105-a73c-96db0d8a222e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:12:20.466490  233042 system_pods.go:89] "etcd-no-preload-016092" [0f6e4cb6-316b-43df-a462-23149355ec0f] Running
	I1025 09:12:20.466497  233042 system_pods.go:89] "kindnet-mjnmk" [3ee7c0f6-4cc5-49c6-a75d-d872e4815e55] Running
	I1025 09:12:20.466501  233042 system_pods.go:89] "kube-apiserver-no-preload-016092" [fc491147-f261-4ca8-b21b-d27eaa4e5fec] Running
	I1025 09:12:20.466506  233042 system_pods.go:89] "kube-controller-manager-no-preload-016092" [3b9d1750-5255-49b9-9d5d-16435d3f9ddb] Running
	I1025 09:12:20.466512  233042 system_pods.go:89] "kube-proxy-h4nh4" [2e6f1992-cab6-4299-b446-192d73e4d08f] Running
	I1025 09:12:20.466517  233042 system_pods.go:89] "kube-scheduler-no-preload-016092" [3bb354fe-bc77-46fe-92a5-e47fb1abe772] Running
	I1025 09:12:20.466529  233042 system_pods.go:89] "storage-provisioner" [c0f58e72-c483-4f5f-8073-ab1db7b48dee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:12:20.466555  233042 retry.go:31] will retry after 479.211752ms: missing components: kube-dns
	I1025 09:12:20.950376  233042 system_pods.go:86] 8 kube-system pods found
	I1025 09:12:20.950425  233042 system_pods.go:89] "coredns-66bc5c9577-g85s4" [add063fb-dbe3-4105-a73c-96db0d8a222e] Running
	I1025 09:12:20.950433  233042 system_pods.go:89] "etcd-no-preload-016092" [0f6e4cb6-316b-43df-a462-23149355ec0f] Running
	I1025 09:12:20.950439  233042 system_pods.go:89] "kindnet-mjnmk" [3ee7c0f6-4cc5-49c6-a75d-d872e4815e55] Running
	I1025 09:12:20.950445  233042 system_pods.go:89] "kube-apiserver-no-preload-016092" [fc491147-f261-4ca8-b21b-d27eaa4e5fec] Running
	I1025 09:12:20.950451  233042 system_pods.go:89] "kube-controller-manager-no-preload-016092" [3b9d1750-5255-49b9-9d5d-16435d3f9ddb] Running
	I1025 09:12:20.950456  233042 system_pods.go:89] "kube-proxy-h4nh4" [2e6f1992-cab6-4299-b446-192d73e4d08f] Running
	I1025 09:12:20.950461  233042 system_pods.go:89] "kube-scheduler-no-preload-016092" [3bb354fe-bc77-46fe-92a5-e47fb1abe772] Running
	I1025 09:12:20.950465  233042 system_pods.go:89] "storage-provisioner" [c0f58e72-c483-4f5f-8073-ab1db7b48dee] Running
	I1025 09:12:20.950475  233042 system_pods.go:126] duration metric: took 1.064948161s to wait for k8s-apps to be running ...
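
The retry loop above lists the kube-system pods and re-checks until the last missing component (kube-dns, served by the coredns pod) reports Running, backing off a few hundred milliseconds between attempts. A simplified client-go version of that check:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether every pod in kube-system is in phase Running.
    func allRunning(cs *kubernetes.Clientset) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        for {
            ok, err := allRunning(cs)
            if err != nil {
                log.Fatal(err)
            }
            if ok {
                fmt.Println("all kube-system pods are running")
                return
            }
            // The log above uses a jittered backoff in roughly this range.
            time.Sleep(300 * time.Millisecond)
        }
    }
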
	I1025 09:12:20.950488  233042 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:12:20.950539  233042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:12:20.966358  233042 system_svc.go:56] duration metric: took 15.860675ms WaitForService to wait for kubelet
	I1025 09:12:20.966401  233042 kubeadm.go:586] duration metric: took 13.889788876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:12:20.966424  233042 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:12:20.969715  233042 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:12:20.969745  233042 node_conditions.go:123] node cpu capacity is 8
	I1025 09:12:20.969763  233042 node_conditions.go:105] duration metric: took 3.333333ms to run NodePressure ...
	I1025 09:12:20.969779  233042 start.go:241] waiting for startup goroutines ...
	I1025 09:12:20.969792  233042 start.go:246] waiting for cluster config update ...
	I1025 09:12:20.969807  233042 start.go:255] writing updated cluster config ...
	I1025 09:12:20.970145  233042 ssh_runner.go:195] Run: rm -f paused
	I1025 09:12:20.974546  233042 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:12:20.978339  233042 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g85s4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:20.982901  233042 pod_ready.go:94] pod "coredns-66bc5c9577-g85s4" is "Ready"
	I1025 09:12:20.982926  233042 pod_ready.go:86] duration metric: took 4.47725ms for pod "coredns-66bc5c9577-g85s4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:20.985079  233042 pod_ready.go:83] waiting for pod "etcd-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:20.989357  233042 pod_ready.go:94] pod "etcd-no-preload-016092" is "Ready"
	I1025 09:12:20.989382  233042 pod_ready.go:86] duration metric: took 4.278096ms for pod "etcd-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:20.991793  233042 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:20.996297  233042 pod_ready.go:94] pod "kube-apiserver-no-preload-016092" is "Ready"
	I1025 09:12:20.996328  233042 pod_ready.go:86] duration metric: took 4.507477ms for pod "kube-apiserver-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:20.998375  233042 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:21.378741  233042 pod_ready.go:94] pod "kube-controller-manager-no-preload-016092" is "Ready"
	I1025 09:12:21.378777  233042 pod_ready.go:86] duration metric: took 380.380102ms for pod "kube-controller-manager-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:21.579159  233042 pod_ready.go:83] waiting for pod "kube-proxy-h4nh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:21.978496  233042 pod_ready.go:94] pod "kube-proxy-h4nh4" is "Ready"
	I1025 09:12:21.978523  233042 pod_ready.go:86] duration metric: took 399.337727ms for pod "kube-proxy-h4nh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:22.178908  233042 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:22.578381  233042 pod_ready.go:94] pod "kube-scheduler-no-preload-016092" is "Ready"
	I1025 09:12:22.578417  233042 pod_ready.go:86] duration metric: took 399.484126ms for pod "kube-scheduler-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:12:22.578432  233042 pod_ready.go:40] duration metric: took 1.603859091s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:12:22.623517  233042 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:12:22.625414  233042 out.go:179] * Done! kubectl is now configured to use "no-preload-016092" cluster and "default" namespace by default
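
The successful start above follows minikube's fixed readiness ladder: poll the kube-system pods with jittered retries until kube-dns is Running, confirm the kubelet unit is active over SSH, verify node pressure conditions, then give each control-plane pod the extra 4m0s "Ready" wait from pod_ready.go. A rough manual replay of those checks, assuming kubectl is already pointed at the no-preload-016092 cluster (the profile name and label selectors are taken from the log; everything else is a sketch):

    # Sketch: re-run the readiness checks performed above, by hand.
    kubectl -n kube-system get pods -l k8s-app=kube-dns \
      -o jsonpath='{.items[*].status.phase}'            # expect: Running
    minikube -p no-preload-016092 ssh -- \
      sudo systemctl is-active --quiet kubelet && echo "kubelet active"
    # The "extra waiting" phase, up to 240s per control-plane component:
    for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod \
        -l component="$c" --timeout=240s
    done
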
	I1025 09:12:18.092706  225660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.111646111s)
	W1025 09:12:18.092753  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59962->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:59962->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1025 09:12:18.092761  225660 logs.go:123] Gathering logs for kube-apiserver [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9] ...
	I1025 09:12:18.092773  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:18.128488  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:12:18.128518  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:18.172731  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:12:18.172762  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:12:18.205012  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:12:18.205048  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:12:18.220355  225660 logs.go:123] Gathering logs for kube-apiserver [de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f] ...
	I1025 09:12:18.220384  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f"
	W1025 09:12:18.247311  225660 logs.go:130] failed kube-apiserver [de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f": Process exited with status 1
	stdout:
	
	stderr:
	E1025 09:12:18.244978    1445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f\": container with ID starting with de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f not found: ID does not exist" containerID="de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f"
	time="2025-10-25T09:12:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f\": container with ID starting with de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f not found: ID does not exist"
	 output: 
	** stderr ** 
	E1025 09:12:18.244978    1445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f\": container with ID starting with de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f not found: ID does not exist" containerID="de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f"
	time="2025-10-25T09:12:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f\": container with ID starting with de810c649d94e5c1ef4c9fb5904e436396445fb5b1becc2000e0dac4a0f4032f not found: ID does not exist"
	
	** /stderr **
	I1025 09:12:18.247343  225660 logs.go:123] Gathering logs for kube-controller-manager [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6] ...
	I1025 09:12:18.247360  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	I1025 09:12:18.272687  225660 logs.go:123] Gathering logs for kube-controller-manager [0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3] ...
	I1025 09:12:18.272716  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	I1025 09:12:18.299706  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:12:18.299735  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:12:20.843109  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:12:20.843548  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:12:20.843671  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:12:20.843736  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:12:20.872546  225660 cri.go:89] found id: "112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:20.872571  225660 cri.go:89] found id: ""
	I1025 09:12:20.872586  225660 logs.go:282] 1 containers: [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9]
	I1025 09:12:20.872653  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:20.876936  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:12:20.877019  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:12:20.904636  225660 cri.go:89] found id: ""
	I1025 09:12:20.904676  225660 logs.go:282] 0 containers: []
	W1025 09:12:20.904689  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:12:20.904697  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:12:20.904772  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:12:20.932398  225660 cri.go:89] found id: ""
	I1025 09:12:20.932425  225660 logs.go:282] 0 containers: []
	W1025 09:12:20.932437  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:12:20.932445  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:12:20.932500  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:12:20.962590  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:20.962619  225660 cri.go:89] found id: ""
	I1025 09:12:20.962628  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:12:20.962703  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:20.966759  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:12:20.966828  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:12:21.000161  225660 cri.go:89] found id: ""
	I1025 09:12:21.000187  225660 logs.go:282] 0 containers: []
	W1025 09:12:21.000198  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:12:21.000206  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:12:21.000260  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:12:21.027849  225660 cri.go:89] found id: "3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	I1025 09:12:21.027870  225660 cri.go:89] found id: "0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	I1025 09:12:21.027874  225660 cri.go:89] found id: ""
	I1025 09:12:21.027881  225660 logs.go:282] 2 containers: [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6 0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3]
	I1025 09:12:21.027928  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:21.032217  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:21.035936  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:12:21.036007  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:12:21.062126  225660 cri.go:89] found id: ""
	I1025 09:12:21.062152  225660 logs.go:282] 0 containers: []
	W1025 09:12:21.062164  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:12:21.062170  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:12:21.062217  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:12:21.089684  225660 cri.go:89] found id: ""
	I1025 09:12:21.089715  225660 logs.go:282] 0 containers: []
	W1025 09:12:21.089727  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:12:21.089747  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:12:21.089762  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:12:21.159399  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:12:21.159438  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:12:21.174520  225660 logs.go:123] Gathering logs for kube-apiserver [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9] ...
	I1025 09:12:21.174551  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:21.209130  225660 logs.go:123] Gathering logs for kube-controller-manager [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6] ...
	I1025 09:12:21.209164  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	I1025 09:12:21.237211  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:12:21.237250  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:12:21.279169  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:12:21.279201  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:12:21.310048  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:12:21.310077  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:12:21.368463  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:12:21.368486  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:12:21.368502  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:21.414244  225660 logs.go:123] Gathering logs for kube-controller-manager [0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3] ...
	I1025 09:12:21.414277  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	W1025 09:12:20.614344  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:22.615154  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:24.615298  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	I1025 09:12:23.945019  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:12:23.945468  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:12:23.945519  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:12:23.945591  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:12:23.973235  225660 cri.go:89] found id: "112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:23.973261  225660 cri.go:89] found id: ""
	I1025 09:12:23.973268  225660 logs.go:282] 1 containers: [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9]
	I1025 09:12:23.973328  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:23.977472  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:12:23.977548  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:12:24.006390  225660 cri.go:89] found id: ""
	I1025 09:12:24.006415  225660 logs.go:282] 0 containers: []
	W1025 09:12:24.006422  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:12:24.006428  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:12:24.006478  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:12:24.034246  225660 cri.go:89] found id: ""
	I1025 09:12:24.034278  225660 logs.go:282] 0 containers: []
	W1025 09:12:24.034286  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:12:24.034292  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:12:24.034385  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:12:24.062817  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:24.062841  225660 cri.go:89] found id: ""
	I1025 09:12:24.062849  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:12:24.062908  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:24.067036  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:12:24.067088  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:12:24.093901  225660 cri.go:89] found id: ""
	I1025 09:12:24.093927  225660 logs.go:282] 0 containers: []
	W1025 09:12:24.093937  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:12:24.093945  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:12:24.094004  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:12:24.122816  225660 cri.go:89] found id: "3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	I1025 09:12:24.122840  225660 cri.go:89] found id: "0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	I1025 09:12:24.122844  225660 cri.go:89] found id: ""
	I1025 09:12:24.122851  225660 logs.go:282] 2 containers: [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6 0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3]
	I1025 09:12:24.122896  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:24.127134  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:24.130829  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:12:24.130897  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:12:24.158316  225660 cri.go:89] found id: ""
	I1025 09:12:24.158338  225660 logs.go:282] 0 containers: []
	W1025 09:12:24.158345  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:12:24.158351  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:12:24.158395  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:12:24.186698  225660 cri.go:89] found id: ""
	I1025 09:12:24.186727  225660 logs.go:282] 0 containers: []
	W1025 09:12:24.186739  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:12:24.186817  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:12:24.186839  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:12:24.219081  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:12:24.219116  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:12:24.233832  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:12:24.233859  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:12:24.291822  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:12:24.291843  225660 logs.go:123] Gathering logs for kube-controller-manager [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6] ...
	I1025 09:12:24.291871  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	I1025 09:12:24.319778  225660 logs.go:123] Gathering logs for kube-controller-manager [0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3] ...
	I1025 09:12:24.319805  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0ad1fd54ca530af72def611a946c0581ee8615deab73c1f83f5768516e29caf3"
	I1025 09:12:24.346250  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:12:24.346274  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:12:24.420751  225660 logs.go:123] Gathering logs for kube-apiserver [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9] ...
	I1025 09:12:24.420798  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:24.454705  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:12:24.454744  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:24.504680  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:12:24.504718  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:12:27.048972  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:12:27.049447  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:12:27.049503  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:12:27.049552  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:12:27.079039  225660 cri.go:89] found id: "112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:27.079071  225660 cri.go:89] found id: ""
	I1025 09:12:27.079082  225660 logs.go:282] 1 containers: [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9]
	I1025 09:12:27.079146  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:27.083596  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:12:27.083694  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:12:27.111272  225660 cri.go:89] found id: ""
	I1025 09:12:27.111299  225660 logs.go:282] 0 containers: []
	W1025 09:12:27.111309  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:12:27.111316  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:12:27.111362  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:12:27.142342  225660 cri.go:89] found id: ""
	I1025 09:12:27.142369  225660 logs.go:282] 0 containers: []
	W1025 09:12:27.142379  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:12:27.142387  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:12:27.142463  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:12:27.170963  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:27.170989  225660 cri.go:89] found id: ""
	I1025 09:12:27.170998  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:12:27.171054  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:27.175201  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:12:27.175284  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:12:27.203731  225660 cri.go:89] found id: ""
	I1025 09:12:27.203755  225660 logs.go:282] 0 containers: []
	W1025 09:12:27.203764  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:12:27.203770  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:12:27.203816  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:12:27.233059  225660 cri.go:89] found id: "3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	I1025 09:12:27.233078  225660 cri.go:89] found id: ""
	I1025 09:12:27.233086  225660 logs.go:282] 1 containers: [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6]
	I1025 09:12:27.233145  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:12:27.237257  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:12:27.237374  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:12:27.265270  225660 cri.go:89] found id: ""
	I1025 09:12:27.265311  225660 logs.go:282] 0 containers: []
	W1025 09:12:27.265322  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:12:27.265329  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:12:27.265387  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:12:27.307262  225660 cri.go:89] found id: ""
	I1025 09:12:27.307304  225660 logs.go:282] 0 containers: []
	W1025 09:12:27.307315  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:12:27.307327  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:12:27.307379  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:12:27.349143  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:12:27.349185  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:12:27.379882  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:12:27.379917  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:12:27.452835  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:12:27.452871  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:12:27.468076  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:12:27.468111  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:12:27.526784  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:12:27.526812  225660 logs.go:123] Gathering logs for kube-apiserver [112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9] ...
	I1025 09:12:27.526829  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 112562b7f120b55440e568a29f4d12728eed6c893b1e01365e2c87a808afd2e9"
	I1025 09:12:27.558493  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:12:27.558522  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:12:27.603269  225660 logs.go:123] Gathering logs for kube-controller-manager [3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6] ...
	I1025 09:12:27.603308  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3df8ec9f1dbc4d8e3c053d358906b417b59b481b088aadb9ae669951ca9c70f6"
	W1025 09:12:26.615862  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
	W1025 09:12:29.114791  235923 pod_ready.go:104] pod "coredns-5dd5756b68-wm9rk" is not "Ready", error: <nil>
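
In parallel, the 225660 profile never gets that far: every healthz probe against https://192.168.85.2:8443 is refused, so the log collector keeps cycling, re-enumerating CRI containers per component and re-gathering kubelet, dmesg, and CRI-O output each round. The probe and the per-component scan can be reproduced on the node with roughly the same commands the log shows (the endpoint and component names come from the log above):

    # Sketch of the failing health probe and the container enumeration loop.
    curl -sk --max-time 5 https://192.168.85.2:8443/healthz \
      || echo "apiserver unreachable"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
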
	
	
	==> CRI-O <==
	Oct 25 09:12:20 no-preload-016092 crio[777]: time="2025-10-25T09:12:20.012217503Z" level=info msg="Starting container: 848c074d14c29627b53910375dc9a6852743d2a2e272e983a5814976d2a9fe74" id=65ba971e-8bbc-4c24-b1d8-be4a0b9b6f6d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:20 no-preload-016092 crio[777]: time="2025-10-25T09:12:20.01431214Z" level=info msg="Started container" PID=2937 containerID=848c074d14c29627b53910375dc9a6852743d2a2e272e983a5814976d2a9fe74 description=kube-system/coredns-66bc5c9577-g85s4/coredns id=65ba971e-8bbc-4c24-b1d8-be4a0b9b6f6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=40e43aa5f2e1fb679778c50af365f5bed7b6d295fcefee2fc669f3195835eff0
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.086437789Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b59948a3-6e45-4b10-b2a5-f84307dbc117 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.086572606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.091614573Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b7bb63221a11473b8af5626068cadd770ce2f08471b13f290565804e2cbed76f UID:acbe50c4-9fa3-499e-8b25-b374b1be96f9 NetNS:/var/run/netns/10606fd9-b457-4e74-9fe8-f5268138e3fd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005209d0}] Aliases:map[]}"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.091666264Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.101303843Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b7bb63221a11473b8af5626068cadd770ce2f08471b13f290565804e2cbed76f UID:acbe50c4-9fa3-499e-8b25-b374b1be96f9 NetNS:/var/run/netns/10606fd9-b457-4e74-9fe8-f5268138e3fd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005209d0}] Aliases:map[]}"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.10144491Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.102221477Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.103065069Z" level=info msg="Ran pod sandbox b7bb63221a11473b8af5626068cadd770ce2f08471b13f290565804e2cbed76f with infra container: default/busybox/POD" id=b59948a3-6e45-4b10-b2a5-f84307dbc117 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.104383561Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3e00393a-78fb-4a1d-b4cc-67dffa6826b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.104519869Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3e00393a-78fb-4a1d-b4cc-67dffa6826b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.104568227Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3e00393a-78fb-4a1d-b4cc-67dffa6826b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.105168313Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fc0fbe51-fc9d-444f-80e4-63cff27992a8 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.108615833Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.853875076Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=fc0fbe51-fc9d-444f-80e4-63cff27992a8 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.854531677Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=122e87c3-bcc7-4ee3-8a92-31f881d70816 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.855945442Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=709bd1e5-b391-43d3-9aa1-1a572e297d21 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.859348876Z" level=info msg="Creating container: default/busybox/busybox" id=a20b946d-d7fc-4ea4-9e6a-fb0150ed3938 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.859468641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.862810434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.86321338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.890945148Z" level=info msg="Created container 412d6503c92b2507c00ba76e25fe504df54ed667559344721423df45dc1b0f6b: default/busybox/busybox" id=a20b946d-d7fc-4ea4-9e6a-fb0150ed3938 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.891606961Z" level=info msg="Starting container: 412d6503c92b2507c00ba76e25fe504df54ed667559344721423df45dc1b0f6b" id=4db3c36b-3a2e-44fe-a2fe-8e4f7a140aa2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:23 no-preload-016092 crio[777]: time="2025-10-25T09:12:23.893343523Z" level=info msg="Started container" PID=3013 containerID=412d6503c92b2507c00ba76e25fe504df54ed667559344721423df45dc1b0f6b description=default/busybox/busybox id=4db3c36b-3a2e-44fe-a2fe-8e4f7a140aa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7bb63221a11473b8af5626068cadd770ce2f08471b13f290565804e2cbed76f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	412d6503c92b2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   b7bb63221a114       busybox                                     default
	848c074d14c29       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   40e43aa5f2e1f       coredns-66bc5c9577-g85s4                    kube-system
	95c428011c4a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   ad67d7b5fc89c       storage-provisioner                         kube-system
	e8c10589fc2ce       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   b86e41755e06f       kindnet-mjnmk                               kube-system
	21e0ba5ef4d99       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   bb49d7416eaca       kube-proxy-h4nh4                            kube-system
	7a6dafcb4f667       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   9ace782883078       kube-controller-manager-no-preload-016092   kube-system
	aae73454a5131       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   a15ba47fb392d       kube-scheduler-no-preload-016092            kube-system
	9df8d91d80144       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   a7bf95a38d419       kube-apiserver-no-preload-016092            kube-system
	8ddc7065ad2f9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   12be022b216f2       etcd-no-preload-016092                      kube-system
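
Everything in the table is on attempt 0 and Running, which points to a clean first boot rather than a crash loop. Two follow-up queries on the same CRI data are often useful here (the truncated container ID is from the table; crictl accepts ID prefixes):

    sudo crictl ps -a --state exited                 # any crashed containers?
    sudo crictl inspect 412d6503c92b2 | jq '.status.state, .status.startedAt'
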
	
	
	==> coredns [848c074d14c29627b53910375dc9a6852743d2a2e272e983a5814976d2a9fe74] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45894 - 33100 "HINFO IN 2798107006843621191.6086764516754126838. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.473977346s
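
This coredns log is healthy: the configuration SHA loads, and the startup HINFO self-query gets an answer (NXDOMAIN is the expected reply to that random probe; it only proves the upstream resolver path works). A quick end-to-end check that cluster DNS answers from a pod, using the same busybox image the suite already pulls:

    kubectl run dns-check --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- \
      nslookup kubernetes.default.svc.cluster.local
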
	
	
	==> describe nodes <==
	Name:               no-preload-016092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-016092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=no-preload-016092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_12_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:11:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-016092
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:12:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:12:22 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:12:22 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:12:22 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:12:22 +0000   Sat, 25 Oct 2025 09:12:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-016092
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b1944563-5e07-4c47-8e9f-57e7b42f6bfa
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-g85s4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-016092                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-mjnmk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-016092             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-016092    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-h4nh4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-016092             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node no-preload-016092 event: Registered Node no-preload-016092 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-016092 status is now: NodeReady
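
The describe output confirms the node went Ready 12s after the second kubelet start, with no taints and requests (850m CPU, 220Mi memory) far below allocatable. The same headroom numbers can be pulled without the full describe dump, for example:

    kubectl get node no-preload-016092 \
      -o jsonpath='{.status.allocatable.cpu} CPU, {.status.allocatable.memory}{"\n"}'
    kubectl describe node no-preload-016092 | grep -A 8 'Allocated resources'
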
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
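
The martian-source warnings are leftover noise from an earlier test on this shared CI host (their timestamps are 08:32-08:33, about forty minutes before this run) and are typical of hairpin traffic between 127.0.0.1 and the pod CIDR. Logging of such packets is a per-interface sysctl, so it can be checked, or silenced when it floods dmesg (a diagnostic convenience only):

    sysctl net.ipv4.conf.all.log_martians
    sudo sysctl -w net.ipv4.conf.all.log_martians=0
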
	
	
	==> etcd [8ddc7065ad2f9634d5c387f8acfaf786513a4c35a9a8f6c809594b3e949ebead] <==
	{"level":"warn","ts":"2025-10-25T09:11:58.013987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.033375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.055535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.064456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.074486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.084496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.091731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.099793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.108260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.117057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.125778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.134230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.141792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.149031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.158450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.174769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.180613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.187095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.204732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.212861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.220633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.241141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.249699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.257571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:11:58.303830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45192","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:12:31 up 54 min,  0 user,  load average: 3.26, 3.34, 2.10
	Linux no-preload-016092 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e8c10589fc2ceb7680b771c7729eb454acca5cb37d3f97366883c3add5920b63] <==
	I1025 09:12:09.156586       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:12:09.156850       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:12:09.156989       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:12:09.157003       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:12:09.157022       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:12:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:12:09.362794       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:12:09.362838       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:12:09.362854       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:12:09.454440       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:12:09.663463       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:12:09.663492       1 metrics.go:72] Registering metrics
	I1025 09:12:09.663538       1 controller.go:711] "Syncing nftables rules"
	I1025 09:12:19.364186       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:12:19.364272       1 main.go:301] handling current node
	I1025 09:12:29.363545       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:12:29.363590       1 main.go:301] handling current node
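	Note: the controller.go:390 line above ("nri plugin exited") is non-fatal; kindnet keeps
	running without NRI, as the later cache-sync and node-handling lines show. A quick check
	that the socket kindnet probes really is absent, assuming the profile name from this run:
	
	  # /var/run/nri/nri.sock is the path named in the error above
	  minikube ssh -p no-preload-016092 -- ls -l /var/run/nri/nri.sock
	  # expected: "No such file or directory"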
	
	
	==> kube-apiserver [9df8d91d80144ed259a6d5e13d80be2fb61c1393693cdd5a11e24297f1abfaf6] <==
	I1025 09:11:58.823725       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1025 09:11:58.824849       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 09:11:58.825765       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:11:58.825863       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:11:58.835089       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:11:58.838376       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:11:59.027289       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:11:59.725353       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:11:59.729192       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:11:59.729209       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:12:00.220927       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:12:00.261765       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:12:00.330831       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:12:00.336999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1025 09:12:00.338102       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:12:00.342706       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:12:00.740537       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:12:01.565937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:12:01.576220       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:12:01.586196       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:12:05.944432       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:12:06.844998       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:12:06.850057       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:12:06.892762       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 09:12:29.871096       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:48624: use of closed network connection
	
	
	==> kube-controller-manager [7a6dafcb4f667b9ec0678e566140454f1c3942a6059b12566f8e5e5418a4631c] <==
	I1025 09:12:05.739241       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:12:05.739259       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:12:05.739270       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:12:05.739277       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:12:05.739340       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-016092"
	I1025 09:12:05.739405       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:12:05.739422       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:12:05.740569       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:12:05.740692       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:12:05.740707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:12:05.740732       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:12:05.740775       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:12:05.740843       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:12:05.740859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:12:05.740891       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:12:05.740914       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:12:05.740914       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:12:05.740691       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:12:05.741122       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:12:05.741221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:12:05.742368       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:12:05.743330       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:12:05.744428       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:12:05.760755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:12:20.741059       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [21e0ba5ef4d9900294c6602976c19ea0295aa0247dc6ce55014525373421cc6e] <==
	I1025 09:12:07.330389       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:12:07.399930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:12:07.500439       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:12:07.500469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:12:07.500562       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:12:07.521173       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:12:07.521222       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:12:07.526885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:12:07.527509       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:12:07.527561       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:12:07.529341       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:12:07.529357       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:12:07.529388       1 config.go:200] "Starting service config controller"
	I1025 09:12:07.529748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:12:07.529804       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:12:07.529814       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:12:07.529881       1 config.go:309] "Starting node config controller"
	I1025 09:12:07.529900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:12:07.529907       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:12:07.629829       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:12:07.629833       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:12:07.629882       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
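	Note: the server.go:256 warning above is advisory: with nodePortAddresses unset, NodePort
	traffic is accepted on every local IP. On a kubeadm-managed cluster it can be narrowed
	through the kube-proxy ConfigMap; a minimal sketch, assuming the context name from this
	run and that the "primary" selector suggested by the warning is acceptable here:
	
	  kubectl --context no-preload-016092 -n kube-system edit configmap kube-proxy
	  # in the config.conf key, set:
	  #   nodePortAddresses: ["primary"]
	  # then roll the daemonset so the new config is picked up:
	  kubectl --context no-preload-016092 -n kube-system rollout restart daemonset kube-proxy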
	
	
	==> kube-scheduler [aae73454a51312af7676e90932e1991f6fb48c267d124ae0acb6a81e36aa6ebf] <==
	E1025 09:11:58.791026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:11:58.791172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:11:58.791698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:11:58.792126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:11:58.792325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:11:58.792326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:11:58.792391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:11:58.792409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:11:58.792453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:11:58.792488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:11:58.792551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:11:58.792618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:11:58.792930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:11:58.792977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:11:58.793012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:11:58.793022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:11:59.612903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:11:59.629169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:11:59.829539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:11:59.830330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:11:59.900438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:11:59.917275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:11:59.954542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:12:00.023078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1025 09:12:00.389076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
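	Note: the burst of "Failed to watch ... forbidden" errors above looks like the usual
	startup race: the scheduler's informers begin listing before the system:kube-scheduler
	RBAC bindings are in place, and the errors stop by the final cache sync at 09:12:00.
	The permissions can be verified after the fact; a minimal check, assuming the context
	name from this run:
	
	  kubectl --context no-preload-016092 auth can-i list pods --as=system:kube-scheduler
	  # expected output once RBAC has settled: yes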
	
	
	==> kubelet <==
	Oct 25 09:12:02 no-preload-016092 kubelet[2321]: I1025 09:12:02.457039    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-016092" podStartSLOduration=1.457018349 podStartE2EDuration="1.457018349s" podCreationTimestamp="2025-10-25 09:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:12:02.440065424 +0000 UTC m=+1.124472756" watchObservedRunningTime="2025-10-25 09:12:02.457018349 +0000 UTC m=+1.141425682"
	Oct 25 09:12:02 no-preload-016092 kubelet[2321]: I1025 09:12:02.472287    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-016092" podStartSLOduration=1.472263737 podStartE2EDuration="1.472263737s" podCreationTimestamp="2025-10-25 09:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:12:02.457434406 +0000 UTC m=+1.141841733" watchObservedRunningTime="2025-10-25 09:12:02.472263737 +0000 UTC m=+1.156671069"
	Oct 25 09:12:02 no-preload-016092 kubelet[2321]: I1025 09:12:02.472419    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-016092" podStartSLOduration=1.47241078 podStartE2EDuration="1.47241078s" podCreationTimestamp="2025-10-25 09:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:12:02.472254167 +0000 UTC m=+1.156661499" watchObservedRunningTime="2025-10-25 09:12:02.47241078 +0000 UTC m=+1.156818114"
	Oct 25 09:12:02 no-preload-016092 kubelet[2321]: I1025 09:12:02.498771    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-016092" podStartSLOduration=1.498722661 podStartE2EDuration="1.498722661s" podCreationTimestamp="2025-10-25 09:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:12:02.483825856 +0000 UTC m=+1.168233187" watchObservedRunningTime="2025-10-25 09:12:02.498722661 +0000 UTC m=+1.183129993"
	Oct 25 09:12:05 no-preload-016092 kubelet[2321]: I1025 09:12:05.719042    2321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:12:05 no-preload-016092 kubelet[2321]: I1025 09:12:05.719761    2321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:12:06 no-preload-016092 kubelet[2321]: I1025 09:12:06.921927    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ee7c0f6-4cc5-49c6-a75d-d872e4815e55-lib-modules\") pod \"kindnet-mjnmk\" (UID: \"3ee7c0f6-4cc5-49c6-a75d-d872e4815e55\") " pod="kube-system/kindnet-mjnmk"
	Oct 25 09:12:06 no-preload-016092 kubelet[2321]: I1025 09:12:06.921975    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ee7c0f6-4cc5-49c6-a75d-d872e4815e55-xtables-lock\") pod \"kindnet-mjnmk\" (UID: \"3ee7c0f6-4cc5-49c6-a75d-d872e4815e55\") " pod="kube-system/kindnet-mjnmk"
	Oct 25 09:12:06 no-preload-016092 kubelet[2321]: I1025 09:12:06.921998    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccz7h\" (UniqueName: \"kubernetes.io/projected/3ee7c0f6-4cc5-49c6-a75d-d872e4815e55-kube-api-access-ccz7h\") pod \"kindnet-mjnmk\" (UID: \"3ee7c0f6-4cc5-49c6-a75d-d872e4815e55\") " pod="kube-system/kindnet-mjnmk"
	Oct 25 09:12:06 no-preload-016092 kubelet[2321]: I1025 09:12:06.922022    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3ee7c0f6-4cc5-49c6-a75d-d872e4815e55-cni-cfg\") pod \"kindnet-mjnmk\" (UID: \"3ee7c0f6-4cc5-49c6-a75d-d872e4815e55\") " pod="kube-system/kindnet-mjnmk"
	Oct 25 09:12:07 no-preload-016092 kubelet[2321]: I1025 09:12:07.022266    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstjn\" (UniqueName: \"kubernetes.io/projected/2e6f1992-cab6-4299-b446-192d73e4d08f-kube-api-access-pstjn\") pod \"kube-proxy-h4nh4\" (UID: \"2e6f1992-cab6-4299-b446-192d73e4d08f\") " pod="kube-system/kube-proxy-h4nh4"
	Oct 25 09:12:07 no-preload-016092 kubelet[2321]: I1025 09:12:07.022320    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e6f1992-cab6-4299-b446-192d73e4d08f-lib-modules\") pod \"kube-proxy-h4nh4\" (UID: \"2e6f1992-cab6-4299-b446-192d73e4d08f\") " pod="kube-system/kube-proxy-h4nh4"
	Oct 25 09:12:07 no-preload-016092 kubelet[2321]: I1025 09:12:07.022538    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e6f1992-cab6-4299-b446-192d73e4d08f-kube-proxy\") pod \"kube-proxy-h4nh4\" (UID: \"2e6f1992-cab6-4299-b446-192d73e4d08f\") " pod="kube-system/kube-proxy-h4nh4"
	Oct 25 09:12:07 no-preload-016092 kubelet[2321]: I1025 09:12:07.022591    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e6f1992-cab6-4299-b446-192d73e4d08f-xtables-lock\") pod \"kube-proxy-h4nh4\" (UID: \"2e6f1992-cab6-4299-b446-192d73e4d08f\") " pod="kube-system/kube-proxy-h4nh4"
	Oct 25 09:12:07 no-preload-016092 kubelet[2321]: I1025 09:12:07.440903    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h4nh4" podStartSLOduration=1.440883588 podStartE2EDuration="1.440883588s" podCreationTimestamp="2025-10-25 09:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:12:07.440785474 +0000 UTC m=+6.125192806" watchObservedRunningTime="2025-10-25 09:12:07.440883588 +0000 UTC m=+6.125290921"
	Oct 25 09:12:09 no-preload-016092 kubelet[2321]: I1025 09:12:09.445358    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mjnmk" podStartSLOduration=1.7262101749999998 podStartE2EDuration="3.445334453s" podCreationTimestamp="2025-10-25 09:12:06 +0000 UTC" firstStartedPulling="2025-10-25 09:12:07.228218633 +0000 UTC m=+5.912625958" lastFinishedPulling="2025-10-25 09:12:08.947342909 +0000 UTC m=+7.631750236" observedRunningTime="2025-10-25 09:12:09.445306793 +0000 UTC m=+8.129714136" watchObservedRunningTime="2025-10-25 09:12:09.445334453 +0000 UTC m=+8.129741785"
	Oct 25 09:12:19 no-preload-016092 kubelet[2321]: I1025 09:12:19.635747    2321 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:12:19 no-preload-016092 kubelet[2321]: I1025 09:12:19.716503    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7z6b\" (UniqueName: \"kubernetes.io/projected/c0f58e72-c483-4f5f-8073-ab1db7b48dee-kube-api-access-r7z6b\") pod \"storage-provisioner\" (UID: \"c0f58e72-c483-4f5f-8073-ab1db7b48dee\") " pod="kube-system/storage-provisioner"
	Oct 25 09:12:19 no-preload-016092 kubelet[2321]: I1025 09:12:19.716578    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/add063fb-dbe3-4105-a73c-96db0d8a222e-config-volume\") pod \"coredns-66bc5c9577-g85s4\" (UID: \"add063fb-dbe3-4105-a73c-96db0d8a222e\") " pod="kube-system/coredns-66bc5c9577-g85s4"
	Oct 25 09:12:19 no-preload-016092 kubelet[2321]: I1025 09:12:19.716607    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjtp\" (UniqueName: \"kubernetes.io/projected/add063fb-dbe3-4105-a73c-96db0d8a222e-kube-api-access-4cjtp\") pod \"coredns-66bc5c9577-g85s4\" (UID: \"add063fb-dbe3-4105-a73c-96db0d8a222e\") " pod="kube-system/coredns-66bc5c9577-g85s4"
	Oct 25 09:12:19 no-preload-016092 kubelet[2321]: I1025 09:12:19.716628    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0f58e72-c483-4f5f-8073-ab1db7b48dee-tmp\") pod \"storage-provisioner\" (UID: \"c0f58e72-c483-4f5f-8073-ab1db7b48dee\") " pod="kube-system/storage-provisioner"
	Oct 25 09:12:20 no-preload-016092 kubelet[2321]: I1025 09:12:20.472658    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.472615393 podStartE2EDuration="13.472615393s" podCreationTimestamp="2025-10-25 09:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:12:20.472272563 +0000 UTC m=+19.156679895" watchObservedRunningTime="2025-10-25 09:12:20.472615393 +0000 UTC m=+19.157022725"
	Oct 25 09:12:20 no-preload-016092 kubelet[2321]: I1025 09:12:20.482785    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g85s4" podStartSLOduration=14.482763779999999 podStartE2EDuration="14.48276378s" podCreationTimestamp="2025-10-25 09:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:12:20.482520209 +0000 UTC m=+19.166927541" watchObservedRunningTime="2025-10-25 09:12:20.48276378 +0000 UTC m=+19.167171112"
	Oct 25 09:12:22 no-preload-016092 kubelet[2321]: I1025 09:12:22.834826    2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262qz\" (UniqueName: \"kubernetes.io/projected/acbe50c4-9fa3-499e-8b25-b374b1be96f9-kube-api-access-262qz\") pod \"busybox\" (UID: \"acbe50c4-9fa3-499e-8b25-b374b1be96f9\") " pod="default/busybox"
	Oct 25 09:12:24 no-preload-016092 kubelet[2321]: I1025 09:12:24.483277    2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.732726872 podStartE2EDuration="2.483257265s" podCreationTimestamp="2025-10-25 09:12:22 +0000 UTC" firstStartedPulling="2025-10-25 09:12:23.104815967 +0000 UTC m=+21.789223285" lastFinishedPulling="2025-10-25 09:12:23.855346364 +0000 UTC m=+22.539753678" observedRunningTime="2025-10-25 09:12:24.483045794 +0000 UTC m=+23.167453126" watchObservedRunningTime="2025-10-25 09:12:24.483257265 +0000 UTC m=+23.167664597"
	
	
	==> storage-provisioner [95c428011c4a8bd2d9477e8c8b26c0d1a5af474a38f14bcdc81f40274aa622ab] <==
	I1025 09:12:20.025132       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:12:20.033199       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:12:20.033257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:12:20.035694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:20.040472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:12:20.040600       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:12:20.040745       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-016092_77ebbd61-8e68-4cd0-acfe-5da82ec92aec!
	I1025 09:12:20.040745       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed8381ef-ef55-4ab4-b1c1-024372829c5a", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-016092_77ebbd61-8e68-4cd0-acfe-5da82ec92aec became leader
	W1025 09:12:20.046211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:20.048952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:12:20.140980       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-016092_77ebbd61-8e68-4cd0-acfe-5da82ec92aec!
	W1025 09:12:22.052364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:22.056181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:24.059524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:24.063738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:26.066997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:26.071374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:28.074194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:28.079728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:30.083224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:12:30.087239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
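	Note: the repeated warnings.go:70 lines come from the provisioner's leader election,
	which still renews its lease on a v1 Endpoints object every couple of seconds. The
	object it contends for is the one named in the LeaderElection event above and can be
	inspected directly, assuming the context name from this run:
	
	  kubectl --context no-preload-016092 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml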
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016092 -n no-preload-016092
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-016092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-959110 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-959110 --alsologtostderr -v=1: exit status 80 (2.173759211s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-959110 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:12:50.278502  243475 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:12:50.278656  243475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:50.278672  243475 out.go:374] Setting ErrFile to fd 2...
	I1025 09:12:50.278678  243475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:50.278894  243475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:12:50.279153  243475 out.go:368] Setting JSON to false
	I1025 09:12:50.279208  243475 mustload.go:65] Loading cluster: old-k8s-version-959110
	I1025 09:12:50.279546  243475 config.go:182] Loaded profile config "old-k8s-version-959110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:12:50.279990  243475 cli_runner.go:164] Run: docker container inspect old-k8s-version-959110 --format={{.State.Status}}
	I1025 09:12:50.299090  243475 host.go:66] Checking if "old-k8s-version-959110" exists ...
	I1025 09:12:50.299337  243475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:12:50.358975  243475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:12:50.346408697 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:12:50.359565  243475 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-959110 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:12:50.361537  243475 out.go:179] * Pausing node old-k8s-version-959110 ... 
	I1025 09:12:50.362649  243475 host.go:66] Checking if "old-k8s-version-959110" exists ...
	I1025 09:12:50.362889  243475 ssh_runner.go:195] Run: systemctl --version
	I1025 09:12:50.362937  243475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959110
	I1025 09:12:50.381904  243475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/old-k8s-version-959110/id_rsa Username:docker}
	I1025 09:12:50.481937  243475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:12:50.508360  243475 pause.go:52] kubelet running: true
	I1025 09:12:50.508434  243475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:12:50.663582  243475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:12:50.663713  243475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:12:50.734406  243475 cri.go:89] found id: "cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2"
	I1025 09:12:50.734427  243475 cri.go:89] found id: "5c4336cee788e93d2798340c61454d07bc2f9d4178450948699c196882b1cdc2"
	I1025 09:12:50.734431  243475 cri.go:89] found id: "b1e50f0fc694b59ae881456dd885d9a8507ee279341af760564b5cd9331e4f67"
	I1025 09:12:50.734434  243475 cri.go:89] found id: "b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8"
	I1025 09:12:50.734437  243475 cri.go:89] found id: "7ce0d64f6b1a2dcf89ae23a24090dbd4d59c4c691710a7d02dd7fffa794e02e1"
	I1025 09:12:50.734440  243475 cri.go:89] found id: "e15713036371f805b74f2d057e2867132a9b8ed98c416e4d6e43fe9ffa9cbd9e"
	I1025 09:12:50.734443  243475 cri.go:89] found id: "9466b431271e21f3a242dc756379276676595e7eb555ed6f14657af03640240f"
	I1025 09:12:50.734445  243475 cri.go:89] found id: "3f24a504d288f733fe74c74fa02786888ccd69f7186ec1db7ea9f52d71c6e6a8"
	I1025 09:12:50.734448  243475 cri.go:89] found id: "7dd332f2bf0d902a5c1b6207fed896fb2e0bd13cb11ed5aa25e88769cf340c1d"
	I1025 09:12:50.734453  243475 cri.go:89] found id: "8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	I1025 09:12:50.734455  243475 cri.go:89] found id: "ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d"
	I1025 09:12:50.734457  243475 cri.go:89] found id: ""
	I1025 09:12:50.734519  243475 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:12:50.746585  243475 retry.go:31] will retry after 203.734203ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:12:50Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:12:50.951046  243475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:12:50.964369  243475 pause.go:52] kubelet running: false
	I1025 09:12:50.964443  243475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:12:51.108241  243475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:12:51.108331  243475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:12:51.175463  243475 cri.go:89] found id: "cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2"
	I1025 09:12:51.175509  243475 cri.go:89] found id: "5c4336cee788e93d2798340c61454d07bc2f9d4178450948699c196882b1cdc2"
	I1025 09:12:51.175517  243475 cri.go:89] found id: "b1e50f0fc694b59ae881456dd885d9a8507ee279341af760564b5cd9331e4f67"
	I1025 09:12:51.175521  243475 cri.go:89] found id: "b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8"
	I1025 09:12:51.175526  243475 cri.go:89] found id: "7ce0d64f6b1a2dcf89ae23a24090dbd4d59c4c691710a7d02dd7fffa794e02e1"
	I1025 09:12:51.175530  243475 cri.go:89] found id: "e15713036371f805b74f2d057e2867132a9b8ed98c416e4d6e43fe9ffa9cbd9e"
	I1025 09:12:51.175533  243475 cri.go:89] found id: "9466b431271e21f3a242dc756379276676595e7eb555ed6f14657af03640240f"
	I1025 09:12:51.175535  243475 cri.go:89] found id: "3f24a504d288f733fe74c74fa02786888ccd69f7186ec1db7ea9f52d71c6e6a8"
	I1025 09:12:51.175537  243475 cri.go:89] found id: "7dd332f2bf0d902a5c1b6207fed896fb2e0bd13cb11ed5aa25e88769cf340c1d"
	I1025 09:12:51.175543  243475 cri.go:89] found id: "8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	I1025 09:12:51.175545  243475 cri.go:89] found id: "ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d"
	I1025 09:12:51.175547  243475 cri.go:89] found id: ""
	I1025 09:12:51.175585  243475 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:12:51.188188  243475 retry.go:31] will retry after 249.017896ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:12:51Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:12:51.437765  243475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:12:51.451153  243475 pause.go:52] kubelet running: false
	I1025 09:12:51.451215  243475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:12:51.593538  243475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:12:51.593628  243475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:12:51.661572  243475 cri.go:89] found id: "cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2"
	I1025 09:12:51.661604  243475 cri.go:89] found id: "5c4336cee788e93d2798340c61454d07bc2f9d4178450948699c196882b1cdc2"
	I1025 09:12:51.661609  243475 cri.go:89] found id: "b1e50f0fc694b59ae881456dd885d9a8507ee279341af760564b5cd9331e4f67"
	I1025 09:12:51.661611  243475 cri.go:89] found id: "b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8"
	I1025 09:12:51.661614  243475 cri.go:89] found id: "7ce0d64f6b1a2dcf89ae23a24090dbd4d59c4c691710a7d02dd7fffa794e02e1"
	I1025 09:12:51.661617  243475 cri.go:89] found id: "e15713036371f805b74f2d057e2867132a9b8ed98c416e4d6e43fe9ffa9cbd9e"
	I1025 09:12:51.661619  243475 cri.go:89] found id: "9466b431271e21f3a242dc756379276676595e7eb555ed6f14657af03640240f"
	I1025 09:12:51.661622  243475 cri.go:89] found id: "3f24a504d288f733fe74c74fa02786888ccd69f7186ec1db7ea9f52d71c6e6a8"
	I1025 09:12:51.661624  243475 cri.go:89] found id: "7dd332f2bf0d902a5c1b6207fed896fb2e0bd13cb11ed5aa25e88769cf340c1d"
	I1025 09:12:51.661631  243475 cri.go:89] found id: "8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	I1025 09:12:51.661633  243475 cri.go:89] found id: "ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d"
	I1025 09:12:51.661636  243475 cri.go:89] found id: ""
	I1025 09:12:51.661692  243475 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:12:51.673690  243475 retry.go:31] will retry after 459.349605ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:12:51Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:12:52.133376  243475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:12:52.147380  243475 pause.go:52] kubelet running: false
	I1025 09:12:52.147433  243475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:12:52.297284  243475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:12:52.297405  243475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:12:52.372394  243475 cri.go:89] found id: "cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2"
	I1025 09:12:52.372428  243475 cri.go:89] found id: "5c4336cee788e93d2798340c61454d07bc2f9d4178450948699c196882b1cdc2"
	I1025 09:12:52.372434  243475 cri.go:89] found id: "b1e50f0fc694b59ae881456dd885d9a8507ee279341af760564b5cd9331e4f67"
	I1025 09:12:52.372439  243475 cri.go:89] found id: "b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8"
	I1025 09:12:52.372443  243475 cri.go:89] found id: "7ce0d64f6b1a2dcf89ae23a24090dbd4d59c4c691710a7d02dd7fffa794e02e1"
	I1025 09:12:52.372448  243475 cri.go:89] found id: "e15713036371f805b74f2d057e2867132a9b8ed98c416e4d6e43fe9ffa9cbd9e"
	I1025 09:12:52.372453  243475 cri.go:89] found id: "9466b431271e21f3a242dc756379276676595e7eb555ed6f14657af03640240f"
	I1025 09:12:52.372457  243475 cri.go:89] found id: "3f24a504d288f733fe74c74fa02786888ccd69f7186ec1db7ea9f52d71c6e6a8"
	I1025 09:12:52.372462  243475 cri.go:89] found id: "7dd332f2bf0d902a5c1b6207fed896fb2e0bd13cb11ed5aa25e88769cf340c1d"
	I1025 09:12:52.372476  243475 cri.go:89] found id: "8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	I1025 09:12:52.372480  243475 cri.go:89] found id: "ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d"
	I1025 09:12:52.372484  243475 cri.go:89] found id: ""
	I1025 09:12:52.372528  243475 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:12:52.387114  243475 out.go:203] 
	W1025 09:12:52.388341  243475 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:12:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:12:52.388367  243475 out.go:285] * 
	W1025 09:12:52.392289  243475 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:12:52.393522  243475 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-959110 --alsologtostderr -v=1 failed: exit status 80
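The step that fails is minikube's container listing, not pausing itself: "sudo runc list -f json" exits with status 1 because /run/runc does not exist on this CRI-O node, and every retry hits the same error until the GUEST_PAUSE exit above. A minimal reproduction against the still-running profile from this test:

	minikube ssh -p old-k8s-version-959110 -- sudo runc list -f json
	# expected: level=error msg="open /run/runc: no such file or directory"
	# the workloads themselves remain visible through the CRI instead:
	minikube ssh -p old-k8s-version-959110 -- sudo crictl ps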
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-959110
helpers_test.go:243: (dbg) docker inspect old-k8s-version-959110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e",
	        "Created": "2025-10-25T09:10:32.791597968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 236119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:11:45.668432244Z",
	            "FinishedAt": "2025-10-25T09:11:44.078780373Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/hostname",
	        "HostsPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/hosts",
	        "LogPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e-json.log",
	        "Name": "/old-k8s-version-959110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-959110:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-959110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e",
	                "LowerDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-959110",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-959110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-959110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-959110",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-959110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "892c750d89ec4af38186e74b7e6da119736e6e27c71db1b1020e67b3a0fe8131",
	            "SandboxKey": "/var/run/docker/netns/892c750d89ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-959110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:c8:57:7e:16:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58b5fad6c4ae7f65feaa543d9f157207a68afa3f5da4e8c5604314ac776b104d",
	                    "EndpointID": "0c84694cdeab14e6c5327fad5c01a3740b1b4786dd660e4cf88dbfb361aabd2e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-959110",
	                        "e80032bb8f45"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
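The decisive lines in that dump for a Pause post-mortem sit under State: Status is "running" and Paused is false at the Docker level. A minimal sketch of extracting just those fields with Go's os/exec and encoding/json, assuming only that the docker CLI is on PATH (the struct and names here are illustrative, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState mirrors only the State fields this post-mortem looks at.
type containerState struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	// docker inspect prints a JSON array with one element per container.
	out, err := exec.Command("docker", "inspect", "old-k8s-version-959110").Output()
	if err != nil {
		panic(err)
	}
	var containers []containerState
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Printf("status=%s running=%t paused=%t\n", c.State.Status, c.State.Running, c.State.Paused)
	}
}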
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110: exit status 2 (343.266101ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
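Exit status 2 from status --format={{.Host}} still comes with a usable state on stdout ("Running" above), which is why the harness notes it "may be ok": a degraded component makes the command exit non-zero without invalidating the host field. A sketch of that tolerant invocation pattern in Go (treating non-zero exits as informational is an assumption about the harness, not a copy of helpers_test.go):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-959110")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", out)
	case errors.As(err, &exitErr):
		// Non-zero exit still prints a state (e.g. "Running" with code 2),
		// so report it rather than failing outright.
		fmt.Printf("host state: %s (exit %d, may be ok)\n", out, exitErr.ExitCode())
	default:
		panic(err) // binary missing or not executable
	}
}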
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-959110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-959110 logs -n 25: (1.171545965s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-851718    │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p cert-options-077936 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-423026                                                                                                                                                                                                                   │ force-systemd-env-423026  │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p running-upgrade-462303                                                                                                                                                                                                                     │ running-upgrade-462303    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-047620    │ jenkins │ v1.32.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ cert-options-077936 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ -p cert-options-077936 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p cert-options-077936                                                                                                                                                                                                                        │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-047620    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	│ stop    │ -p old-k8s-version-959110 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p missing-upgrade-047620                                                                                                                                                                                                                     │ missing-upgrade-047620    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ stop    │ -p no-preload-016092 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ image   │ old-k8s-version-959110 image list --format=json                                                                                                                                                                                               │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ pause   │ -p old-k8s-version-959110 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
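In the audit trail, rows with an empty END TIME are the operations that had not completed when the log was captured: either still running (like the concurrent no-preload-016092 start that produced the trace below) or failed, like the final pause under test. Replaying that last row outside the harness is a one-liner; a sketch, with the binary path and flags copied from the audit row:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the failing audit row; CombinedOutput keeps the
	// --alsologtostderr trace interleaved the way the report shows it.
	out, err := exec.Command("out/minikube-linux-amd64", "pause",
		"-p", "old-k8s-version-959110", "--alsologtostderr", "-v=1").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("pause failed:", err)
	}
}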
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:12:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:12:48.609146  242862 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:12:48.609432  242862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:48.609442  242862 out.go:374] Setting ErrFile to fd 2...
	I1025 09:12:48.609448  242862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:48.609680  242862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:12:48.610131  242862 out.go:368] Setting JSON to false
	I1025 09:12:48.611296  242862 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3317,"bootTime":1761380252,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:12:48.611398  242862 start.go:141] virtualization: kvm guest
	I1025 09:12:48.613493  242862 out.go:179] * [no-preload-016092] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:12:48.614870  242862 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:12:48.614865  242862 notify.go:220] Checking for updates...
	I1025 09:12:48.617843  242862 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:12:48.619188  242862 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:12:48.620229  242862 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:12:48.622036  242862 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:12:48.623385  242862 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:12:48.625156  242862 config.go:182] Loaded profile config "no-preload-016092": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:12:48.625627  242862 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:12:48.650158  242862 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:12:48.650265  242862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:12:48.711050  242862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:12:48.700972191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:12:48.711146  242862 docker.go:318] overlay module found
	I1025 09:12:48.713111  242862 out.go:179] * Using the docker driver based on existing profile
	I1025 09:12:48.714448  242862 start.go:305] selected driver: docker
	I1025 09:12:48.714462  242862 start.go:925] validating driver "docker" against &{Name:no-preload-016092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-016092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:12:48.714570  242862 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:12:48.715160  242862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:12:48.772764  242862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:12:48.761832089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:12:48.773027  242862 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:12:48.773057  242862 cni.go:84] Creating CNI manager for ""
	I1025 09:12:48.773105  242862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:12:48.773153  242862 start.go:349] cluster config:
	{Name:no-preload-016092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-016092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:12:48.775294  242862 out.go:179] * Starting "no-preload-016092" primary control-plane node in "no-preload-016092" cluster
	I1025 09:12:48.777173  242862 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:12:48.778559  242862 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:12:48.779773  242862 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:12:48.779880  242862 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:12:48.779976  242862 cache.go:107] acquiring lock: {Name:mkab4f6e8d094c924a84f1f437a4c7734b400948 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.779892  242862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/config.json ...
	I1025 09:12:48.780059  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 09:12:48.780073  242862 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.901µs
	I1025 09:12:48.780086  242862 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 09:12:48.780099  242862 cache.go:107] acquiring lock: {Name:mkc480cbe7ddd61d8d49576e9cd44148eca559c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780089  242862 cache.go:107] acquiring lock: {Name:mkf12ab013056328f943057f1e57ee96ab4f8693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780121  242862 cache.go:107] acquiring lock: {Name:mkb37c51c6170bf967432d116ba89d9611206758 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780174  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 09:12:48.780196  242862 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 97.467µs
	I1025 09:12:48.780191  242862 cache.go:107] acquiring lock: {Name:mkec4d4dfff9bcb5a2371e218628f16838494a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780210  242862 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 09:12:48.780163  242862 cache.go:107] acquiring lock: {Name:mk26a091ec56a56490d3a1d5ed548e1c597e22e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780237  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 09:12:48.780246  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 09:12:48.780247  242862 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 189.161µs
	I1025 09:12:48.780230  242862 cache.go:107] acquiring lock: {Name:mk8456072ec9ce96b941dcf5f6917885e89a456e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780213  242862 cache.go:107] acquiring lock: {Name:mkd64783ac380767a74f441cfd600a02cae54363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780257  242862 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 67.789µs
	I1025 09:12:48.780264  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 09:12:48.780268  242862 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 09:12:48.780273  242862 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 119.775µs
	I1025 09:12:48.780288  242862 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 09:12:48.780257  242862 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 09:12:48.780214  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 09:12:48.780310  242862 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 199.041µs
	I1025 09:12:48.780321  242862 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 09:12:48.780357  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 09:12:48.780385  242862 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 203.652µs
	I1025 09:12:48.780395  242862 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 09:12:48.780419  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1025 09:12:48.780441  242862 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 275.004µs
	I1025 09:12:48.780462  242862 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 09:12:48.780474  242862 cache.go:87] Successfully saved all images to host disk.
	I1025 09:12:48.802968  242862 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:12:48.802994  242862 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:12:48.803015  242862 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:12:48.803044  242862 start.go:360] acquireMachinesLock for no-preload-016092: {Name:mkf17a28ac8d7251f84e1b69e0d12e40185bba01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.803102  242862 start.go:364] duration metric: took 40.279µs to acquireMachinesLock for "no-preload-016092"
	I1025 09:12:48.803120  242862 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:12:48.803128  242862 fix.go:54] fixHost starting: 
	I1025 09:12:48.803379  242862 cli_runner.go:164] Run: docker container inspect no-preload-016092 --format={{.State.Status}}
	I1025 09:12:48.822099  242862 fix.go:112] recreateIfNeeded on no-preload-016092: state=Stopped err=<nil>
	W1025 09:12:48.822139  242862 fix.go:138] unexpected machine state, will restart: <nil>
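Every line of the trace above follows the klog header declared at the top of the section ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg), so warnings like the fix.go:138 line can be sifted out mechanically. A sketch of such a filter, assuming a captured log is piped in on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^\s*([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // docker info lines are very long
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// Keep warnings, errors, and fatals; drop the info chatter.
		if m[1] == "W" || m[1] == "E" || m[1] == "F" {
			fmt.Printf("%s %s %s %s\n", m[1], m[3], m[5], m[6])
		}
	}
}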
	
	
	==> CRI-O <==
	Oct 25 09:12:15 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:15.389921933Z" level=info msg="Created container ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8/kubernetes-dashboard" id=5ca89648-a7dd-4c8c-84fd-4822f412a8fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:15 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:15.390708047Z" level=info msg="Starting container: ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d" id=9bd40d68-f845-4ee8-8f9b-f2873522e67e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:15 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:15.392577201Z" level=info msg="Started container" PID=1732 containerID=ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8/kubernetes-dashboard id=9bd40d68-f845-4ee8-8f9b-f2873522e67e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ed6afac26945336ec0a6a46ba1f4ecdcdb2272e607190c33ad6665261fb5ef3
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.445914682Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8e73e5c7-d1e6-4e53-9298-b90221d20d79 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.446892063Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=311019e4-8b5d-45ae-b8ca-a252e49a841d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.447887348Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2913c577-35d2-4606-9dfd-a53b15408517 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.448039441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.4524219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.452578952Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ab6f10dc8e2353b7537b109d5e9fcdac32893762d415e71182725feaf5cc7cfa/merged/etc/passwd: no such file or directory"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.452609977Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ab6f10dc8e2353b7537b109d5e9fcdac32893762d415e71182725feaf5cc7cfa/merged/etc/group: no such file or directory"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.452902709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.479799277Z" level=info msg="Created container cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2: kube-system/storage-provisioner/storage-provisioner" id=2913c577-35d2-4606-9dfd-a53b15408517 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.48046834Z" level=info msg="Starting container: cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2" id=6c2f2c69-6b8b-43db-949e-5d7340528c60 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.483011157Z" level=info msg="Started container" PID=1755 containerID=cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2 description=kube-system/storage-provisioner/storage-provisioner id=6c2f2c69-6b8b-43db-949e-5d7340528c60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af56cf60bbb7ffeeec03be2311deaa22e9340b7a13d3a0c4430c8baaec11e6fb
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.340461902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=06c41f15-3d69-45ad-b9bf-3b96e1ed45b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.34159848Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=510b40d5-7e0c-42e4-b399-2479b31ae0ed name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.342964323Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper" id=0c7d2f7a-9a2f-4957-a6c0-20e6f1dc5b6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.343119609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.35089826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.351577383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.397018048Z" level=info msg="Created container 8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper" id=0c7d2f7a-9a2f-4957-a6c0-20e6f1dc5b6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.397903683Z" level=info msg="Starting container: 8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907" id=6320af47-ee83-468f-bdc9-c0b6878ef640 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.401155592Z" level=info msg="Started container" PID=1771 containerID=8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper id=6320af47-ee83-468f-bdc9-c0b6878ef640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6481463177e1c6078e92532df4e50c0b3f12efbdf174ce613b690fee32729f8b
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.459743604Z" level=info msg="Removing container: e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec" id=027a0969-8f70-47f5-968e-a51144461b8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.470219916Z" level=info msg="Removed container e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper" id=027a0969-8f70-47f5-968e-a51144461b8a name=/runtime.v1.RuntimeService/RemoveContainer
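The CRI-O excerpt is a sequence of container lifecycle events (Created / Starting / Started / Removing / Removed) keyed by 64-hex container IDs; lining those up is what exposes the dashboard-metrics-scraper restart (attempt 2 in the status table below). A rough extraction sketch over a saved excerpt on stdin, with the verbs taken from the msg strings above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// Verbs as they appear in the msg="..." fields of the excerpt.
	verb = regexp.MustCompile(`msg="(Created|Starting|Started|Removing|Removed) container`)
	// CRI-O logs full 64-character hex container IDs.
	hexID = regexp.MustCompile(`[0-9a-f]{64}`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 256*1024), 256*1024)
	for sc.Scan() {
		line := sc.Text()
		v := verb.FindStringSubmatch(line)
		id := hexID.FindString(line)
		if v != nil && id != "" {
			fmt.Printf("%-9s %s\n", v[1], id[:12]) // short ID, docker ps style
		}
	}
}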
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8c96c2e02063b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   6481463177e1c       dashboard-metrics-scraper-5f989dc9cf-d8nsm       kubernetes-dashboard
	cfe6b32c9a8b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   af56cf60bbb7f       storage-provisioner                              kube-system
	ad126b7780d13       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   5ed6afac26945       kubernetes-dashboard-8694d4445c-fl9k8            kubernetes-dashboard
	3f33402124079       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   2893199be5d3d       busybox                                          default
	5c4336cee788e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   ad98b455dd1ba       coredns-5dd5756b68-wm9rk                         kube-system
	b1e50f0fc694b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   a7f634e6d2828       kindnet-gq9q4                                    kube-system
	b350f37abce9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   af56cf60bbb7f       storage-provisioner                              kube-system
	7ce0d64f6b1a2       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   587e01e1c1f9b       kube-proxy-zrfv4                                 kube-system
	e15713036371f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   84f1529554ed5       kube-apiserver-old-k8s-version-959110            kube-system
	9466b431271e2       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   a5e37f75f966d       kube-scheduler-old-k8s-version-959110            kube-system
	3f24a504d288f       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   294a8997d99d4       kube-controller-manager-old-k8s-version-959110   kube-system
	7dd332f2bf0d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   8fccfe2608083       etcd-old-k8s-version-959110                      kube-system
	
	
	==> coredns [5c4336cee788e93d2798340c61454d07bc2f9d4178450948699c196882b1cdc2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37774 - 32356 "HINFO IN 4025417509953567859.4966161910043322910. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083800148s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
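That final warning is the readiness story in one line: the kubernetes plugin's version probe to the apiserver Service ClusterIP timed out, consistent with the string of "Still waiting on" messages above it. The probe reduces to a single TCP dial; a standalone sketch of the same check (address and port taken from the log line; it must run somewhere with a route to the Service CIDR, e.g. inside a cluster pod):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint CoreDNS dials before serving: the kubernetes Service ClusterIP.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver Service unreachable:", err) // corresponds to the i/o timeout above
		return
	}
	defer conn.Close()
	fmt.Println("apiserver Service reachable")
}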
	
	
	==> describe nodes <==
	Name:               old-k8s-version-959110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-959110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=old-k8s-version-959110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_10_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:10:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-959110
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:12:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:11:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-959110
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e79815a7-9819-419a-accf-a6b2fbca5bb9
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-wm9rk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-959110                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-gq9q4                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-959110             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-959110    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-zrfv4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-959110             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-d8nsm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fl9k8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-959110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-959110 event: Registered Node old-k8s-version-959110 in Controller
	  Normal  NodeReady                99s                kubelet          Node old-k8s-version-959110 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-959110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-959110 event: Registered Node old-k8s-version-959110 in Controller
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [7dd332f2bf0d902a5c1b6207fed896fb2e0bd13cb11ed5aa25e88769cf340c1d] <==
	{"level":"info","ts":"2025-10-25T09:11:53.908425Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:11:53.908434Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:11:53.908827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-10-25T09:11:53.908997Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:11:53.909193Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:11:53.909253Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:11:53.911565Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:11:53.911741Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-25T09:11:53.911764Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-25T09:11:53.912881Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:11:53.913009Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:11:55.599592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:11:55.599658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:11:55.599681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-25T09:11:55.599697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.599703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.599711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.599719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.600777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-959110 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:11:55.600815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:11:55.600803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:11:55.600943Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:11:55.600967Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T09:11:55.601955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:11:55.60217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 09:12:53 up 55 min,  0 user,  load average: 2.83, 3.23, 2.10
	Linux old-k8s-version-959110 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b1e50f0fc694b59ae881456dd885d9a8507ee279341af760564b5cd9331e4f67] <==
	I1025 09:11:58.043980       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:11:58.044520       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:11:58.044722       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:11:58.044742       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:11:58.044762       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:11:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:11:58.247262       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:11:58.247335       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:11:58.247351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:11:58.247499       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:11:58.741685       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:11:58.741725       1 metrics.go:72] Registering metrics
	I1025 09:11:58.741801       1 controller.go:711] "Syncing nftables rules"
	I1025 09:12:08.250354       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:08.250413       1 main.go:301] handling current node
	I1025 09:12:18.248758       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:18.248803       1 main.go:301] handling current node
	I1025 09:12:28.247658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:28.247718       1 main.go:301] handling current node
	I1025 09:12:38.247797       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:38.247862       1 main.go:301] handling current node
	I1025 09:12:48.254488       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:48.254521       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e15713036371f805b74f2d057e2867132a9b8ed98c416e4d6e43fe9ffa9cbd9e] <==
	I1025 09:11:56.714779       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:11:56.738434       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 09:11:56.738488       1 aggregator.go:166] initial CRD sync complete...
	I1025 09:11:56.738497       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 09:11:56.738505       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:11:56.738525       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:11:56.739008       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:11:56.740476       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:11:56.740489       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:11:56.740830       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:11:56.741722       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 09:11:56.742011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1025 09:11:56.760355       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:11:56.774978       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:11:57.651461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:11:57.893674       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:11:57.938484       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:11:57.961423       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:11:57.970787       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:11:57.981055       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:11:58.033849       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.130.82"}
	I1025 09:11:58.049420       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.233.65"}
	I1025 09:12:09.397426       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:12:09.457191       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:12:09.505896       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3f24a504d288f733fe74c74fa02786888ccd69f7186ec1db7ea9f52d71c6e6a8] <==
	I1025 09:12:09.567527       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1025 09:12:09.568669       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1025 09:12:09.599945       1 shared_informer.go:318] Caches are synced for taint
	I1025 09:12:09.600057       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1025 09:12:09.600058       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1025 09:12:09.600122       1 taint_manager.go:211] "Sending events to api server"
	I1025 09:12:09.600184       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-959110"
	I1025 09:12:09.600168       1 event.go:307] "Event occurred" object="old-k8s-version-959110" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-959110 event: Registered Node old-k8s-version-959110 in Controller"
	I1025 09:12:09.600275       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1025 09:12:09.603574       1 shared_informer.go:318] Caches are synced for stateful set
	I1025 09:12:09.622946       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:12:09.628032       1 shared_informer.go:318] Caches are synced for daemon sets
	I1025 09:12:09.673392       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:12:09.991822       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:12:09.992930       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:12:09.992958       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:12:12.412833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.845µs"
	I1025 09:12:13.419618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.409µs"
	I1025 09:12:14.422661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.812µs"
	I1025 09:12:15.431263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.694935ms"
	I1025 09:12:15.432151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="161.211µs"
	I1025 09:12:32.471445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.886µs"
	I1025 09:12:36.873488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.67039ms"
	I1025 09:12:36.873675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.833µs"
	I1025 09:12:39.844541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.133µs"
	
	
	==> kube-proxy [7ce0d64f6b1a2dcf89ae23a24090dbd4d59c4c691710a7d02dd7fffa794e02e1] <==
	I1025 09:11:57.842170       1 server_others.go:69] "Using iptables proxy"
	I1025 09:11:57.853141       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1025 09:11:57.885350       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:11:57.889218       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:11:57.889262       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:11:57.889272       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:11:57.889300       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:11:57.889682       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:11:57.890034       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:11:57.890873       1 config.go:188] "Starting service config controller"
	I1025 09:11:57.890969       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:11:57.891044       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:11:57.891080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:11:57.891563       1 config.go:315] "Starting node config controller"
	I1025 09:11:57.891654       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:11:57.991298       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:11:57.991300       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:11:57.992718       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9466b431271e21f3a242dc756379276676595e7eb555ed6f14657af03640240f] <==
	I1025 09:11:54.280066       1 serving.go:348] Generated self-signed cert in-memory
	I1025 09:11:56.730700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 09:11:56.730732       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:11:56.736846       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 09:11:56.736946       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 09:11:56.736948       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:11:56.737171       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:11:56.737074       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 09:11:56.737289       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:11:56.737446       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 09:11:56.739872       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 09:11:56.837632       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 09:11:56.837891       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:11:56.847771       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.532891     727 topology_manager.go:215] "Topology Admit Handler" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-d8nsm"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.639868     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8nsm\" (UID: \"7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.639926     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ea4be496-b6f5-4cc7-8474-a67d52eee0df-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fl9k8\" (UID: \"ea4be496-b6f5-4cc7-8474-a67d52eee0df\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.639950     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54rh\" (UniqueName: \"kubernetes.io/projected/7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b-kube-api-access-p54rh\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8nsm\" (UID: \"7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.640147     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq8lf\" (UniqueName: \"kubernetes.io/projected/ea4be496-b6f5-4cc7-8474-a67d52eee0df-kube-api-access-tq8lf\") pod \"kubernetes-dashboard-8694d4445c-fl9k8\" (UID: \"ea4be496-b6f5-4cc7-8474-a67d52eee0df\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8"
	Oct 25 09:12:12 old-k8s-version-959110 kubelet[727]: I1025 09:12:12.401518     727 scope.go:117] "RemoveContainer" containerID="26b05f025e258051d3e097c80cbf2f1933cc157a318c78a10932040a2316204c"
	Oct 25 09:12:13 old-k8s-version-959110 kubelet[727]: I1025 09:12:13.405881     727 scope.go:117] "RemoveContainer" containerID="26b05f025e258051d3e097c80cbf2f1933cc157a318c78a10932040a2316204c"
	Oct 25 09:12:13 old-k8s-version-959110 kubelet[727]: I1025 09:12:13.406057     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:13 old-k8s-version-959110 kubelet[727]: E1025 09:12:13.406463     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:14 old-k8s-version-959110 kubelet[727]: I1025 09:12:14.410564     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:14 old-k8s-version-959110 kubelet[727]: E1025 09:12:14.410976     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:15 old-k8s-version-959110 kubelet[727]: I1025 09:12:15.425045     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8" podStartSLOduration=0.930238024 podCreationTimestamp="2025-10-25 09:12:09 +0000 UTC" firstStartedPulling="2025-10-25 09:12:09.855264372 +0000 UTC m=+16.618416574" lastFinishedPulling="2025-10-25 09:12:15.350005768 +0000 UTC m=+22.113157962" observedRunningTime="2025-10-25 09:12:15.424624012 +0000 UTC m=+22.187776224" watchObservedRunningTime="2025-10-25 09:12:15.424979412 +0000 UTC m=+22.188131624"
	Oct 25 09:12:19 old-k8s-version-959110 kubelet[727]: I1025 09:12:19.834585     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:19 old-k8s-version-959110 kubelet[727]: E1025 09:12:19.834861     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:28 old-k8s-version-959110 kubelet[727]: I1025 09:12:28.445413     727 scope.go:117] "RemoveContainer" containerID="b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: I1025 09:12:32.339713     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: I1025 09:12:32.458381     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: I1025 09:12:32.458630     727 scope.go:117] "RemoveContainer" containerID="8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: E1025 09:12:32.459023     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:39 old-k8s-version-959110 kubelet[727]: I1025 09:12:39.834000     727 scope.go:117] "RemoveContainer" containerID="8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	Oct 25 09:12:39 old-k8s-version-959110 kubelet[727]: E1025 09:12:39.834262     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: kubelet.service: Consumed 1.616s CPU time.
	
	
	==> kubernetes-dashboard [ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d] <==
	2025/10/25 09:12:15 Using namespace: kubernetes-dashboard
	2025/10/25 09:12:15 Using in-cluster config to connect to apiserver
	2025/10/25 09:12:15 Using secret token for csrf signing
	2025/10/25 09:12:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:12:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:12:15 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 09:12:15 Generating JWE encryption key
	2025/10/25 09:12:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:12:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:12:15 Initializing JWE encryption key from synchronized object
	2025/10/25 09:12:15 Creating in-cluster Sidecar client
	2025/10/25 09:12:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:12:15 Serving insecurely on HTTP port: 9090
	2025/10/25 09:12:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:12:15 Starting overwatch
	
	
	==> storage-provisioner [b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8] <==
	I1025 09:11:57.799430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:12:27.812057       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2] <==
	I1025 09:12:28.494737       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:12:28.502521       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:12:28.502557       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:12:45.901541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:12:45.901637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c422531-d5a2-40fe-8114-48f4769b0181", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-959110_cd7465f1-aa89-4990-b0eb-6b2e56537ca0 became leader
	I1025 09:12:45.901724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959110_cd7465f1-aa89-4990-b0eb-6b2e56537ca0!
	I1025 09:12:46.001994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959110_cd7465f1-aa89-4990-b0eb-6b2e56537ca0!
	

                                                
                                                
-- /stdout --
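The storage-provisioner failure in the logs above ("dial tcp 10.96.0.1:443: i/o timeout") points at the in-cluster apiserver Service VIP being unreachable from the pod while kube-proxy was restarting. As a sketch only, assuming curl is present in the kicbase image and reusing the profile/container name from this run, the VIP can be probed from inside the node and the Service's backing endpoints checked:

	docker exec old-k8s-version-959110 curl -sk --max-time 5 https://10.96.0.1:443/version
	kubectl --context old-k8s-version-959110 get endpoints kubernetes

If the second command shows the apiserver endpoint but the first times out, the gap is in kube-proxy's iptables programming rather than the apiserver itself, which matches the restart timeline in the events above.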
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-959110 -n old-k8s-version-959110
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-959110 -n old-k8s-version-959110: exit status 2 (355.558625ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
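A non-zero exit from `minikube status` here encodes component state rather than a hard failure (the harness itself notes "may be ok"): after `pause`, kubelet is expected to be stopped while the host keeps running. The report already queries single fields with `--format={{.APIServer}}`; the same Go-template mechanism can read the kubelet field, e.g. as a sketch against this profile:

	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-959110 -n old-k8s-version-959110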
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-959110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-959110
helpers_test.go:243: (dbg) docker inspect old-k8s-version-959110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e",
	        "Created": "2025-10-25T09:10:32.791597968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 236119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:11:45.668432244Z",
	            "FinishedAt": "2025-10-25T09:11:44.078780373Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/hostname",
	        "HostsPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/hosts",
	        "LogPath": "/var/lib/docker/containers/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e/e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e-json.log",
	        "Name": "/old-k8s-version-959110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-959110:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-959110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e80032bb8f45b95839b1d7d130a3a5c81003b289b7fa265dbf13f6eaa023c97e",
	                "LowerDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/170cc9811f7dd59b0180e023fcb1c2a201d2ed83c7a3b76c9674ccd573ec700e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-959110",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-959110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-959110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-959110",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-959110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "892c750d89ec4af38186e74b7e6da119736e6e27c71db1b1020e67b3a0fe8131",
	            "SandboxKey": "/var/run/docker/netns/892c750d89ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-959110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:c8:57:7e:16:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58b5fad6c4ae7f65feaa543d9f157207a68afa3f5da4e8c5604314ac776b104d",
	                    "EndpointID": "0c84694cdeab14e6c5327fad5c01a3740b1b4786dd660e4cf88dbfb361aabd2e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-959110",
	                        "e80032bb8f45"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
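Note that "Paused": false appears in the inspect output even though this test just ran `pause`: with the docker driver, minikube pauses the Kubernetes components inside the node via the container runtime rather than docker-pausing the kic container itself, so the outer container stays running. A minimal sketch for pulling just the relevant state fields, assuming the same container name:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-959110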
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110: exit status 2 (354.581541ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-959110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-959110 logs -n 25: (1.291648534s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-851718    │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p cert-options-077936 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-423026                                                                                                                                                                                                                   │ force-systemd-env-423026  │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p running-upgrade-462303                                                                                                                                                                                                                     │ running-upgrade-462303    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-047620    │ jenkins │ v1.32.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ cert-options-077936 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ ssh     │ -p cert-options-077936 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ delete  │ -p cert-options-077936                                                                                                                                                                                                                        │ cert-options-077936       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496 │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-047620    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	│ stop    │ -p old-k8s-version-959110 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p missing-upgrade-047620                                                                                                                                                                                                                     │ missing-upgrade-047620    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ stop    │ -p no-preload-016092 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092         │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ image   │ old-k8s-version-959110 image list --format=json                                                                                                                                                                                               │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ pause   │ -p old-k8s-version-959110 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-959110    │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
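
The table above is minikube's command audit for this run: every CLI invocation with its profile, user, version, and start/end times. Assuming the binary path used throughout this report, a sketch for reproducing the table after the fact:

	$ out/minikube-linux-amd64 logs --audit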
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:12:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:12:48.609146  242862 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:12:48.609432  242862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:48.609442  242862 out.go:374] Setting ErrFile to fd 2...
	I1025 09:12:48.609448  242862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:48.609680  242862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:12:48.610131  242862 out.go:368] Setting JSON to false
	I1025 09:12:48.611296  242862 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3317,"bootTime":1761380252,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:12:48.611398  242862 start.go:141] virtualization: kvm guest
	I1025 09:12:48.613493  242862 out.go:179] * [no-preload-016092] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:12:48.614870  242862 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:12:48.614865  242862 notify.go:220] Checking for updates...
	I1025 09:12:48.617843  242862 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:12:48.619188  242862 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:12:48.620229  242862 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:12:48.622036  242862 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:12:48.623385  242862 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:12:48.625156  242862 config.go:182] Loaded profile config "no-preload-016092": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:12:48.625627  242862 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:12:48.650158  242862 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:12:48.650265  242862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:12:48.711050  242862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:12:48.700972191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:12:48.711146  242862 docker.go:318] overlay module found
	I1025 09:12:48.713111  242862 out.go:179] * Using the docker driver based on existing profile
	I1025 09:12:48.714448  242862 start.go:305] selected driver: docker
	I1025 09:12:48.714462  242862 start.go:925] validating driver "docker" against &{Name:no-preload-016092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-016092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:12:48.714570  242862 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:12:48.715160  242862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:12:48.772764  242862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:12:48.761832089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:12:48.773027  242862 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:12:48.773057  242862 cni.go:84] Creating CNI manager for ""
	I1025 09:12:48.773105  242862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:12:48.773153  242862 start.go:349] cluster config:
	{Name:no-preload-016092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-016092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:12:48.775294  242862 out.go:179] * Starting "no-preload-016092" primary control-plane node in "no-preload-016092" cluster
	I1025 09:12:48.777173  242862 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:12:48.778559  242862 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:12:48.779773  242862 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:12:48.779880  242862 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:12:48.779976  242862 cache.go:107] acquiring lock: {Name:mkab4f6e8d094c924a84f1f437a4c7734b400948 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.779892  242862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/config.json ...
	I1025 09:12:48.780059  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 09:12:48.780073  242862 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.901µs
	I1025 09:12:48.780086  242862 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 09:12:48.780099  242862 cache.go:107] acquiring lock: {Name:mkc480cbe7ddd61d8d49576e9cd44148eca559c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780089  242862 cache.go:107] acquiring lock: {Name:mkf12ab013056328f943057f1e57ee96ab4f8693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780121  242862 cache.go:107] acquiring lock: {Name:mkb37c51c6170bf967432d116ba89d9611206758 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780174  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 09:12:48.780196  242862 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 97.467µs
	I1025 09:12:48.780191  242862 cache.go:107] acquiring lock: {Name:mkec4d4dfff9bcb5a2371e218628f16838494a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780210  242862 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 09:12:48.780163  242862 cache.go:107] acquiring lock: {Name:mk26a091ec56a56490d3a1d5ed548e1c597e22e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780237  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 09:12:48.780246  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 09:12:48.780247  242862 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 189.161µs
	I1025 09:12:48.780230  242862 cache.go:107] acquiring lock: {Name:mk8456072ec9ce96b941dcf5f6917885e89a456e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780213  242862 cache.go:107] acquiring lock: {Name:mkd64783ac380767a74f441cfd600a02cae54363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.780257  242862 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 67.789µs
	I1025 09:12:48.780264  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 09:12:48.780268  242862 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 09:12:48.780273  242862 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 119.775µs
	I1025 09:12:48.780288  242862 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 09:12:48.780257  242862 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 09:12:48.780214  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 09:12:48.780310  242862 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 199.041µs
	I1025 09:12:48.780321  242862 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 09:12:48.780357  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 09:12:48.780385  242862 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 203.652µs
	I1025 09:12:48.780395  242862 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 09:12:48.780419  242862 cache.go:115] /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1025 09:12:48.780441  242862 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 275.004µs
	I1025 09:12:48.780462  242862 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21796-5966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 09:12:48.780474  242862 cache.go:87] Successfully saved all images to host disk.
	I1025 09:12:48.802968  242862 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:12:48.802994  242862 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:12:48.803015  242862 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:12:48.803044  242862 start.go:360] acquireMachinesLock for no-preload-016092: {Name:mkf17a28ac8d7251f84e1b69e0d12e40185bba01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:12:48.803102  242862 start.go:364] duration metric: took 40.279µs to acquireMachinesLock for "no-preload-016092"
	I1025 09:12:48.803120  242862 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:12:48.803128  242862 fix.go:54] fixHost starting: 
	I1025 09:12:48.803379  242862 cli_runner.go:164] Run: docker container inspect no-preload-016092 --format={{.State.Status}}
	I1025 09:12:48.822099  242862 fix.go:112] recreateIfNeeded on no-preload-016092: state=Stopped err=<nil>
	W1025 09:12:48.822139  242862 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:12:48.824278  242862 out.go:252] * Restarting existing docker container for "no-preload-016092" ...
	I1025 09:12:48.824355  242862 cli_runner.go:164] Run: docker start no-preload-016092
	I1025 09:12:49.080503  242862 cli_runner.go:164] Run: docker container inspect no-preload-016092 --format={{.State.Status}}
	I1025 09:12:49.100563  242862 kic.go:430] container "no-preload-016092" state is running.
	I1025 09:12:49.100945  242862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-016092
	I1025 09:12:49.119663  242862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/config.json ...
	I1025 09:12:49.119916  242862 machine.go:93] provisionDockerMachine start ...
	I1025 09:12:49.119996  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:49.140386  242862 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:49.140626  242862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1025 09:12:49.140653  242862 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:12:49.141317  242862 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42352->127.0.0.1:33069: read: connection reset by peer
	I1025 09:12:52.288941  242862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-016092
	
	I1025 09:12:52.288976  242862 ubuntu.go:182] provisioning hostname "no-preload-016092"
	I1025 09:12:52.289046  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:52.309312  242862 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:52.309629  242862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1025 09:12:52.309693  242862 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-016092 && echo "no-preload-016092" | sudo tee /etc/hostname
	I1025 09:12:52.467996  242862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-016092
	
	I1025 09:12:52.468078  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:52.488287  242862 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:52.488557  242862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1025 09:12:52.488576  242862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-016092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-016092/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-016092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:12:52.634762  242862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:12:52.634795  242862 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:12:52.634820  242862 ubuntu.go:190] setting up certificates
	I1025 09:12:52.634833  242862 provision.go:84] configureAuth start
	I1025 09:12:52.634892  242862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-016092
	I1025 09:12:52.655364  242862 provision.go:143] copyHostCerts
	I1025 09:12:52.655441  242862 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:12:52.655457  242862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:12:52.655528  242862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:12:52.655679  242862 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:12:52.655694  242862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:12:52.655728  242862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:12:52.655789  242862 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:12:52.655796  242862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:12:52.655827  242862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:12:52.655874  242862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.no-preload-016092 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-016092]
	I1025 09:12:52.777741  242862 provision.go:177] copyRemoteCerts
	I1025 09:12:52.777805  242862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:12:52.777849  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:52.798627  242862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:12:52.906069  242862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:12:52.927504  242862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:12:52.947264  242862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:12:52.965438  242862 provision.go:87] duration metric: took 330.59213ms to configureAuth
	I1025 09:12:52.965473  242862 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:12:52.965715  242862 config.go:182] Loaded profile config "no-preload-016092": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:12:52.965836  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:52.984286  242862 main.go:141] libmachine: Using SSH client type: native
	I1025 09:12:52.984565  242862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1025 09:12:52.984593  242862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:12:53.311257  242862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:12:53.311293  242862 machine.go:96] duration metric: took 4.191352483s to provisionDockerMachine
	I1025 09:12:53.311305  242862 start.go:293] postStartSetup for "no-preload-016092" (driver="docker")
	I1025 09:12:53.311317  242862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:12:53.311380  242862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:12:53.311432  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:53.332808  242862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:12:53.435515  242862 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:12:53.439610  242862 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:12:53.439635  242862 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:12:53.439693  242862 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:12:53.439740  242862 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:12:53.439808  242862 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:12:53.439900  242862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:12:53.448509  242862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:12:53.467585  242862 start.go:296] duration metric: took 156.26762ms for postStartSetup
	I1025 09:12:53.467673  242862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:12:53.467730  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:53.488887  242862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:12:53.587849  242862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:12:53.592749  242862 fix.go:56] duration metric: took 4.789613628s for fixHost
	I1025 09:12:53.592776  242862 start.go:83] releasing machines lock for "no-preload-016092", held for 4.789663622s
	I1025 09:12:53.592847  242862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-016092
	I1025 09:12:53.613355  242862 ssh_runner.go:195] Run: cat /version.json
	I1025 09:12:53.613393  242862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:12:53.613401  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:53.613486  242862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:12:53.635237  242862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:12:53.635718  242862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:12:53.734304  242862 ssh_runner.go:195] Run: systemctl --version
	I1025 09:12:53.796234  242862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:12:53.834118  242862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:12:53.839464  242862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:12:53.839530  242862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:12:53.849367  242862 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:12:53.849396  242862 start.go:495] detecting cgroup driver to use...
	I1025 09:12:53.849430  242862 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:12:53.849474  242862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:12:53.866953  242862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:12:53.880852  242862 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:12:53.880911  242862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:12:53.897706  242862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:12:53.911249  242862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:12:54.012330  242862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:12:54.104390  242862 docker.go:234] disabling docker service ...
	I1025 09:12:54.104449  242862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:12:54.119832  242862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:12:54.132522  242862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:12:54.222904  242862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:12:54.318323  242862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:12:54.332133  242862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:12:54.347316  242862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:12:54.347384  242862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:12:54.358234  242862 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:12:54.358294  242862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:12:54.367492  242862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:12:54.376850  242862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:12:54.387108  242862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:12:54.398190  242862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:12:54.408217  242862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:12:54.418032  242862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:12:54.428147  242862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:12:54.437437  242862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:12:54.445820  242862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:12:54.542017  242862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:12:54.668112  242862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:12:54.668188  242862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:12:54.673099  242862 start.go:563] Will wait 60s for crictl version
	I1025 09:12:54.673168  242862 ssh_runner.go:195] Run: which crictl
	I1025 09:12:54.678583  242862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:12:54.709100  242862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:12:54.709212  242862 ssh_runner.go:195] Run: crio --version
	I1025 09:12:54.739353  242862 ssh_runner.go:195] Run: crio --version
	I1025 09:12:54.772054  242862 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
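
The restart sequence above reconfigures CRI-O purely through sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon_cgroup, unprivileged-port sysctl) and points crictl at the CRI-O socket via /etc/crictl.yaml. A minimal sketch for inspecting the resulting state on the node, assuming the same profile and paths as the log:

	$ out/minikube-linux-amd64 ssh -p no-preload-016092 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	$ out/minikube-linux-amd64 ssh -p no-preload-016092 -- sudo cat /etc/crictl.yaml
	$ out/minikube-linux-amd64 ssh -p no-preload-016092 -- sudo crictl version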
	
	
	==> CRI-O <==
	Oct 25 09:12:15 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:15.389921933Z" level=info msg="Created container ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8/kubernetes-dashboard" id=5ca89648-a7dd-4c8c-84fd-4822f412a8fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:15 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:15.390708047Z" level=info msg="Starting container: ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d" id=9bd40d68-f845-4ee8-8f9b-f2873522e67e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:15 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:15.392577201Z" level=info msg="Started container" PID=1732 containerID=ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8/kubernetes-dashboard id=9bd40d68-f845-4ee8-8f9b-f2873522e67e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ed6afac26945336ec0a6a46ba1f4ecdcdb2272e607190c33ad6665261fb5ef3
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.445914682Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8e73e5c7-d1e6-4e53-9298-b90221d20d79 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.446892063Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=311019e4-8b5d-45ae-b8ca-a252e49a841d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.447887348Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2913c577-35d2-4606-9dfd-a53b15408517 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.448039441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.4524219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.452578952Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ab6f10dc8e2353b7537b109d5e9fcdac32893762d415e71182725feaf5cc7cfa/merged/etc/passwd: no such file or directory"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.452609977Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ab6f10dc8e2353b7537b109d5e9fcdac32893762d415e71182725feaf5cc7cfa/merged/etc/group: no such file or directory"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.452902709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.479799277Z" level=info msg="Created container cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2: kube-system/storage-provisioner/storage-provisioner" id=2913c577-35d2-4606-9dfd-a53b15408517 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.48046834Z" level=info msg="Starting container: cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2" id=6c2f2c69-6b8b-43db-949e-5d7340528c60 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:28 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:28.483011157Z" level=info msg="Started container" PID=1755 containerID=cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2 description=kube-system/storage-provisioner/storage-provisioner id=6c2f2c69-6b8b-43db-949e-5d7340528c60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af56cf60bbb7ffeeec03be2311deaa22e9340b7a13d3a0c4430c8baaec11e6fb
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.340461902Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=06c41f15-3d69-45ad-b9bf-3b96e1ed45b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.34159848Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=510b40d5-7e0c-42e4-b399-2479b31ae0ed name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.342964323Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper" id=0c7d2f7a-9a2f-4957-a6c0-20e6f1dc5b6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.343119609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.35089826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.351577383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.397018048Z" level=info msg="Created container 8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper" id=0c7d2f7a-9a2f-4957-a6c0-20e6f1dc5b6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.397903683Z" level=info msg="Starting container: 8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907" id=6320af47-ee83-468f-bdc9-c0b6878ef640 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.401155592Z" level=info msg="Started container" PID=1771 containerID=8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper id=6320af47-ee83-468f-bdc9-c0b6878ef640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6481463177e1c6078e92532df4e50c0b3f12efbdf174ce613b690fee32729f8b
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.459743604Z" level=info msg="Removing container: e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec" id=027a0969-8f70-47f5-968e-a51144461b8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:12:32 old-k8s-version-959110 crio[568]: time="2025-10-25T09:12:32.470219916Z" level=info msg="Removed container e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm/dashboard-metrics-scraper" id=027a0969-8f70-47f5-968e-a51144461b8a name=/runtime.v1.RuntimeService/RemoveContainer
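
The CRI-O entries above are read from the node's systemd journal; the remove/create pairs for dashboard-metrics-scraper reflect the kubelet restarting a crash-looping container. A sketch for tailing the same stream by hand, assuming journald inside the node as in this run:

	$ out/minikube-linux-amd64 ssh -p old-k8s-version-959110 -- sudo journalctl -u crio --no-pager -n 50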
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8c96c2e02063b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   6481463177e1c       dashboard-metrics-scraper-5f989dc9cf-d8nsm       kubernetes-dashboard
	cfe6b32c9a8b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   af56cf60bbb7f       storage-provisioner                              kube-system
	ad126b7780d13       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago       Running             kubernetes-dashboard        0                   5ed6afac26945       kubernetes-dashboard-8694d4445c-fl9k8            kubernetes-dashboard
	3f33402124079       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   2893199be5d3d       busybox                                          default
	5c4336cee788e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   ad98b455dd1ba       coredns-5dd5756b68-wm9rk                         kube-system
	b1e50f0fc694b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   a7f634e6d2828       kindnet-gq9q4                                    kube-system
	b350f37abce9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   af56cf60bbb7f       storage-provisioner                              kube-system
	7ce0d64f6b1a2       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   587e01e1c1f9b       kube-proxy-zrfv4                                 kube-system
	e15713036371f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   84f1529554ed5       kube-apiserver-old-k8s-version-959110            kube-system
	9466b431271e2       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   a5e37f75f966d       kube-scheduler-old-k8s-version-959110            kube-system
	3f24a504d288f       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   294a8997d99d4       kube-controller-manager-old-k8s-version-959110   kube-system
	7dd332f2bf0d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   8fccfe2608083       etcd-old-k8s-version-959110                      kube-system
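
The status table is crictl's view of the CRI-O runtime, including exited attempts (dashboard-metrics-scraper shows ATTEMPT 2 in state Exited, matching the journal above). A sketch reproducing it with the endpoint configured earlier:

	$ out/minikube-linux-amd64 ssh -p old-k8s-version-959110 -- sudo crictl ps -a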
	
	
	==> coredns [5c4336cee788e93d2798340c61454d07bc2f9d4178450948699c196882b1cdc2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37774 - 32356 "HINFO IN 4025417509953567859.4966161910043322910. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083800148s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
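
The closing warning shows CoreDNS timing out against the in-cluster API VIP (10.96.0.1:443), consistent with the repeated "waiting for Kubernetes API" lines during startup. A sketch for checking the same symptom from the host, assuming the kubeconfig context created for this profile:

	$ kubectl --context old-k8s-version-959110 -n kube-system logs -l k8s-app=kube-dns --tail=20
	$ kubectl --context old-k8s-version-959110 get svc kubernetes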
	
	
	==> describe nodes <==
	Name:               old-k8s-version-959110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-959110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=old-k8s-version-959110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_10_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:10:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-959110
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:12:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:12:27 +0000   Sat, 25 Oct 2025 09:11:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-959110
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e79815a7-9819-419a-accf-a6b2fbca5bb9
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-wm9rk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-959110                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-gq9q4                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-959110             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-959110    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-zrfv4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-959110             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-d8nsm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fl9k8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 2m6s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s               kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s               kubelet          Node old-k8s-version-959110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s               kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           114s               node-controller  Node old-k8s-version-959110 event: Registered Node old-k8s-version-959110 in Controller
	  Normal  NodeReady                101s               kubelet          Node old-k8s-version-959110 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-959110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-959110 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node old-k8s-version-959110 event: Registered Node old-k8s-version-959110 in Controller
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [7dd332f2bf0d902a5c1b6207fed896fb2e0bd13cb11ed5aa25e88769cf340c1d] <==
	{"level":"info","ts":"2025-10-25T09:11:53.908425Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:11:53.908434Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:11:53.908827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-10-25T09:11:53.908997Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:11:53.909193Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:11:53.909253Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:11:53.911565Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:11:53.911741Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-25T09:11:53.911764Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-25T09:11:53.912881Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:11:53.913009Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:11:55.599592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:11:55.599658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:11:55.599681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-25T09:11:55.599697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.599703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.599711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.599719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-25T09:11:55.600777Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-959110 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:11:55.600815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:11:55.600803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:11:55.600943Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:11:55.600967Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T09:11:55.601955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:11:55.60217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 09:12:55 up 55 min,  0 user,  load average: 2.83, 3.23, 2.10
	Linux old-k8s-version-959110 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b1e50f0fc694b59ae881456dd885d9a8507ee279341af760564b5cd9331e4f67] <==
	I1025 09:11:58.043980       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:11:58.044520       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:11:58.044722       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:11:58.044742       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:11:58.044762       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:11:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:11:58.247262       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:11:58.247335       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:11:58.247351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:11:58.247499       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:11:58.741685       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:11:58.741725       1 metrics.go:72] Registering metrics
	I1025 09:11:58.741801       1 controller.go:711] "Syncing nftables rules"
	I1025 09:12:08.250354       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:08.250413       1 main.go:301] handling current node
	I1025 09:12:18.248758       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:18.248803       1 main.go:301] handling current node
	I1025 09:12:28.247658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:28.247718       1 main.go:301] handling current node
	I1025 09:12:38.247797       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:38.247862       1 main.go:301] handling current node
	I1025 09:12:48.254488       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:12:48.254521       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e15713036371f805b74f2d057e2867132a9b8ed98c416e4d6e43fe9ffa9cbd9e] <==
	I1025 09:11:56.714779       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:11:56.738434       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 09:11:56.738488       1 aggregator.go:166] initial CRD sync complete...
	I1025 09:11:56.738497       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 09:11:56.738505       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:11:56.738525       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:11:56.739008       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:11:56.740476       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:11:56.740489       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:11:56.740830       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:11:56.741722       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 09:11:56.742011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1025 09:11:56.760355       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:11:56.774978       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:11:57.651461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:11:57.893674       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:11:57.938484       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:11:57.961423       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:11:57.970787       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:11:57.981055       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:11:58.033849       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.130.82"}
	I1025 09:11:58.049420       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.233.65"}
	I1025 09:12:09.397426       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:12:09.457191       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:12:09.505896       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3f24a504d288f733fe74c74fa02786888ccd69f7186ec1db7ea9f52d71c6e6a8] <==
	I1025 09:12:09.567527       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1025 09:12:09.568669       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1025 09:12:09.599945       1 shared_informer.go:318] Caches are synced for taint
	I1025 09:12:09.600057       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1025 09:12:09.600058       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1025 09:12:09.600122       1 taint_manager.go:211] "Sending events to api server"
	I1025 09:12:09.600184       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-959110"
	I1025 09:12:09.600168       1 event.go:307] "Event occurred" object="old-k8s-version-959110" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-959110 event: Registered Node old-k8s-version-959110 in Controller"
	I1025 09:12:09.600275       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1025 09:12:09.603574       1 shared_informer.go:318] Caches are synced for stateful set
	I1025 09:12:09.622946       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:12:09.628032       1 shared_informer.go:318] Caches are synced for daemon sets
	I1025 09:12:09.673392       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:12:09.991822       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:12:09.992930       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:12:09.992958       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:12:12.412833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.845µs"
	I1025 09:12:13.419618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.409µs"
	I1025 09:12:14.422661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.812µs"
	I1025 09:12:15.431263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.694935ms"
	I1025 09:12:15.432151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="161.211µs"
	I1025 09:12:32.471445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.886µs"
	I1025 09:12:36.873488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.67039ms"
	I1025 09:12:36.873675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.833µs"
	I1025 09:12:39.844541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.133µs"
	
	
	==> kube-proxy [7ce0d64f6b1a2dcf89ae23a24090dbd4d59c4c691710a7d02dd7fffa794e02e1] <==
	I1025 09:11:57.842170       1 server_others.go:69] "Using iptables proxy"
	I1025 09:11:57.853141       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1025 09:11:57.885350       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:11:57.889218       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:11:57.889262       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:11:57.889272       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:11:57.889300       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:11:57.889682       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:11:57.890034       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:11:57.890873       1 config.go:188] "Starting service config controller"
	I1025 09:11:57.890969       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:11:57.891044       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:11:57.891080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:11:57.891563       1 config.go:315] "Starting node config controller"
	I1025 09:11:57.891654       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:11:57.991298       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:11:57.991300       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:11:57.992718       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9466b431271e21f3a242dc756379276676595e7eb555ed6f14657af03640240f] <==
	I1025 09:11:54.280066       1 serving.go:348] Generated self-signed cert in-memory
	I1025 09:11:56.730700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 09:11:56.730732       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:11:56.736846       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 09:11:56.736946       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 09:11:56.736948       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:11:56.737171       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:11:56.737074       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 09:11:56.737289       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:11:56.737446       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 09:11:56.739872       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 09:11:56.837632       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 09:11:56.837891       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:11:56.847771       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.532891     727 topology_manager.go:215] "Topology Admit Handler" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-d8nsm"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.639868     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8nsm\" (UID: \"7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.639926     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ea4be496-b6f5-4cc7-8474-a67d52eee0df-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fl9k8\" (UID: \"ea4be496-b6f5-4cc7-8474-a67d52eee0df\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.639950     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54rh\" (UniqueName: \"kubernetes.io/projected/7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b-kube-api-access-p54rh\") pod \"dashboard-metrics-scraper-5f989dc9cf-d8nsm\" (UID: \"7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm"
	Oct 25 09:12:09 old-k8s-version-959110 kubelet[727]: I1025 09:12:09.640147     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq8lf\" (UniqueName: \"kubernetes.io/projected/ea4be496-b6f5-4cc7-8474-a67d52eee0df-kube-api-access-tq8lf\") pod \"kubernetes-dashboard-8694d4445c-fl9k8\" (UID: \"ea4be496-b6f5-4cc7-8474-a67d52eee0df\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8"
	Oct 25 09:12:12 old-k8s-version-959110 kubelet[727]: I1025 09:12:12.401518     727 scope.go:117] "RemoveContainer" containerID="26b05f025e258051d3e097c80cbf2f1933cc157a318c78a10932040a2316204c"
	Oct 25 09:12:13 old-k8s-version-959110 kubelet[727]: I1025 09:12:13.405881     727 scope.go:117] "RemoveContainer" containerID="26b05f025e258051d3e097c80cbf2f1933cc157a318c78a10932040a2316204c"
	Oct 25 09:12:13 old-k8s-version-959110 kubelet[727]: I1025 09:12:13.406057     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:13 old-k8s-version-959110 kubelet[727]: E1025 09:12:13.406463     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:14 old-k8s-version-959110 kubelet[727]: I1025 09:12:14.410564     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:14 old-k8s-version-959110 kubelet[727]: E1025 09:12:14.410976     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:15 old-k8s-version-959110 kubelet[727]: I1025 09:12:15.425045     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fl9k8" podStartSLOduration=0.930238024 podCreationTimestamp="2025-10-25 09:12:09 +0000 UTC" firstStartedPulling="2025-10-25 09:12:09.855264372 +0000 UTC m=+16.618416574" lastFinishedPulling="2025-10-25 09:12:15.350005768 +0000 UTC m=+22.113157962" observedRunningTime="2025-10-25 09:12:15.424624012 +0000 UTC m=+22.187776224" watchObservedRunningTime="2025-10-25 09:12:15.424979412 +0000 UTC m=+22.188131624"
	Oct 25 09:12:19 old-k8s-version-959110 kubelet[727]: I1025 09:12:19.834585     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:19 old-k8s-version-959110 kubelet[727]: E1025 09:12:19.834861     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:28 old-k8s-version-959110 kubelet[727]: I1025 09:12:28.445413     727 scope.go:117] "RemoveContainer" containerID="b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: I1025 09:12:32.339713     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: I1025 09:12:32.458381     727 scope.go:117] "RemoveContainer" containerID="e22410c0cb896d779aa5a8aeb64a4a6178dc2d2d8bf006cf3f2356be5f80e1ec"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: I1025 09:12:32.458630     727 scope.go:117] "RemoveContainer" containerID="8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	Oct 25 09:12:32 old-k8s-version-959110 kubelet[727]: E1025 09:12:32.459023     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:39 old-k8s-version-959110 kubelet[727]: I1025 09:12:39.834000     727 scope.go:117] "RemoveContainer" containerID="8c96c2e02063bf14fc5670c8dfc175eeeaf714fd11a9a67874e15e2169e8b907"
	Oct 25 09:12:39 old-k8s-version-959110 kubelet[727]: E1025 09:12:39.834262     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d8nsm_kubernetes-dashboard(7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d8nsm" podUID="7d7b3fc0-7efa-423b-ad48-ffffe08d3a5b"
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:12:50 old-k8s-version-959110 systemd[1]: kubelet.service: Consumed 1.616s CPU time.
	
	
	==> kubernetes-dashboard [ad126b7780d13faa8711322118105962e9a1de76b0f5cc3fadeb5ae91364ff0d] <==
	2025/10/25 09:12:15 Using namespace: kubernetes-dashboard
	2025/10/25 09:12:15 Using in-cluster config to connect to apiserver
	2025/10/25 09:12:15 Using secret token for csrf signing
	2025/10/25 09:12:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:12:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:12:15 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 09:12:15 Generating JWE encryption key
	2025/10/25 09:12:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:12:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:12:15 Initializing JWE encryption key from synchronized object
	2025/10/25 09:12:15 Creating in-cluster Sidecar client
	2025/10/25 09:12:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:12:15 Starting overwatch
	2025/10/25 09:12:15 Serving insecurely on HTTP port: 9090
	2025/10/25 09:12:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b350f37abce9cc54a3c30b0c858de7f44b8228901bdbd411d287b5fb802471c8] <==
	I1025 09:11:57.799430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:12:27.812057       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cfe6b32c9a8b5c4eb39aaecac8aa033c6e399e5d191131486d6c691e802638b2] <==
	I1025 09:12:28.494737       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:12:28.502521       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:12:28.502557       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:12:45.901541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:12:45.901637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c422531-d5a2-40fe-8114-48f4769b0181", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-959110_cd7465f1-aa89-4990-b0eb-6b2e56537ca0 became leader
	I1025 09:12:45.901724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959110_cd7465f1-aa89-4990-b0eb-6b2e56537ca0!
	I1025 09:12:46.001994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959110_cd7465f1-aa89-4990-b0eb-6b2e56537ca0!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-959110 -n old-k8s-version-959110
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-959110 -n old-k8s-version-959110: exit status 2 (427.794322ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-959110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-016092 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-016092 --alsologtostderr -v=1: exit status 80 (2.536116933s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-016092 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:13:51.816787  256766 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:13:51.817049  256766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:51.817058  256766 out.go:374] Setting ErrFile to fd 2...
	I1025 09:13:51.817062  256766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:51.817255  256766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:13:51.817495  256766 out.go:368] Setting JSON to false
	I1025 09:13:51.817516  256766 mustload.go:65] Loading cluster: no-preload-016092
	I1025 09:13:51.818339  256766 config.go:182] Loaded profile config "no-preload-016092": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:51.819218  256766 cli_runner.go:164] Run: docker container inspect no-preload-016092 --format={{.State.Status}}
	I1025 09:13:51.838452  256766 host.go:66] Checking if "no-preload-016092" exists ...
	I1025 09:13:51.838768  256766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:13:51.898594  256766 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 09:13:51.887172826 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:13:51.899234  256766 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-016092 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:13:51.901257  256766 out.go:179] * Pausing node no-preload-016092 ... 
	I1025 09:13:51.902608  256766 host.go:66] Checking if "no-preload-016092" exists ...
	I1025 09:13:51.903007  256766 ssh_runner.go:195] Run: systemctl --version
	I1025 09:13:51.903056  256766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-016092
	I1025 09:13:51.921852  256766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/no-preload-016092/id_rsa Username:docker}
	I1025 09:13:52.026620  256766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:13:52.057679  256766 pause.go:52] kubelet running: true
	I1025 09:13:52.057756  256766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:13:52.242519  256766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:13:52.242590  256766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:13:52.311258  256766 cri.go:89] found id: "9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf"
	I1025 09:13:52.311289  256766 cri.go:89] found id: "99317f7c2bffae4d40739f1b3aa6bab2ce12ad89e6c1c3c128a638478a0960af"
	I1025 09:13:52.311296  256766 cri.go:89] found id: "ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0"
	I1025 09:13:52.311301  256766 cri.go:89] found id: "9555087b4a95dd49c3a02af93de2be326ddca27814e2068040e5e19d323de57c"
	I1025 09:13:52.311307  256766 cri.go:89] found id: "51bc04f01d285b33d2ffd2d4857d9986a3d390c118d677a906b8b1b3854fcffe"
	I1025 09:13:52.311312  256766 cri.go:89] found id: "33011a5a64acfce349c374b43be041eef3d52dab4c91a5a31072f67152719323"
	I1025 09:13:52.311316  256766 cri.go:89] found id: "6ac72fdf21daf14e251d8647264ae6703ade9663ba42a5c79cbd7ff91e1f523d"
	I1025 09:13:52.311320  256766 cri.go:89] found id: "023f43058735fc1aa667aba8a40553db5ed69c2c3aa83f526a3647121923840a"
	I1025 09:13:52.311324  256766 cri.go:89] found id: "3e8098e047ed3043a00cc812d78042ae68cad7ea01ba443d06753c58aca09dec"
	I1025 09:13:52.311347  256766 cri.go:89] found id: "48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	I1025 09:13:52.311357  256766 cri.go:89] found id: "9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc"
	I1025 09:13:52.311362  256766 cri.go:89] found id: ""
	I1025 09:13:52.311418  256766 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:13:52.324109  256766 retry.go:31] will retry after 146.801816ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:13:52Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:13:52.471552  256766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:13:52.484302  256766 pause.go:52] kubelet running: false
	I1025 09:13:52.484382  256766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:13:52.651625  256766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:13:52.651766  256766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:13:52.720877  256766 cri.go:89] found id: "9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf"
	I1025 09:13:52.720905  256766 cri.go:89] found id: "99317f7c2bffae4d40739f1b3aa6bab2ce12ad89e6c1c3c128a638478a0960af"
	I1025 09:13:52.720911  256766 cri.go:89] found id: "ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0"
	I1025 09:13:52.720915  256766 cri.go:89] found id: "9555087b4a95dd49c3a02af93de2be326ddca27814e2068040e5e19d323de57c"
	I1025 09:13:52.720919  256766 cri.go:89] found id: "51bc04f01d285b33d2ffd2d4857d9986a3d390c118d677a906b8b1b3854fcffe"
	I1025 09:13:52.720923  256766 cri.go:89] found id: "33011a5a64acfce349c374b43be041eef3d52dab4c91a5a31072f67152719323"
	I1025 09:13:52.720927  256766 cri.go:89] found id: "6ac72fdf21daf14e251d8647264ae6703ade9663ba42a5c79cbd7ff91e1f523d"
	I1025 09:13:52.720931  256766 cri.go:89] found id: "023f43058735fc1aa667aba8a40553db5ed69c2c3aa83f526a3647121923840a"
	I1025 09:13:52.720935  256766 cri.go:89] found id: "3e8098e047ed3043a00cc812d78042ae68cad7ea01ba443d06753c58aca09dec"
	I1025 09:13:52.720943  256766 cri.go:89] found id: "48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	I1025 09:13:52.720947  256766 cri.go:89] found id: "9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc"
	I1025 09:13:52.720956  256766 cri.go:89] found id: ""
	I1025 09:13:52.720998  256766 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:13:52.733573  256766 retry.go:31] will retry after 377.995344ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:13:52Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:13:53.112294  256766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:13:53.126158  256766 pause.go:52] kubelet running: false
	I1025 09:13:53.126221  256766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:13:53.279535  256766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:13:53.279653  256766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:13:53.352958  256766 cri.go:89] found id: "9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf"
	I1025 09:13:53.352979  256766 cri.go:89] found id: "99317f7c2bffae4d40739f1b3aa6bab2ce12ad89e6c1c3c128a638478a0960af"
	I1025 09:13:53.352983  256766 cri.go:89] found id: "ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0"
	I1025 09:13:53.352986  256766 cri.go:89] found id: "9555087b4a95dd49c3a02af93de2be326ddca27814e2068040e5e19d323de57c"
	I1025 09:13:53.352989  256766 cri.go:89] found id: "51bc04f01d285b33d2ffd2d4857d9986a3d390c118d677a906b8b1b3854fcffe"
	I1025 09:13:53.352992  256766 cri.go:89] found id: "33011a5a64acfce349c374b43be041eef3d52dab4c91a5a31072f67152719323"
	I1025 09:13:53.352994  256766 cri.go:89] found id: "6ac72fdf21daf14e251d8647264ae6703ade9663ba42a5c79cbd7ff91e1f523d"
	I1025 09:13:53.352997  256766 cri.go:89] found id: "023f43058735fc1aa667aba8a40553db5ed69c2c3aa83f526a3647121923840a"
	I1025 09:13:53.353005  256766 cri.go:89] found id: "3e8098e047ed3043a00cc812d78042ae68cad7ea01ba443d06753c58aca09dec"
	I1025 09:13:53.353011  256766 cri.go:89] found id: "48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	I1025 09:13:53.353015  256766 cri.go:89] found id: "9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc"
	I1025 09:13:53.353019  256766 cri.go:89] found id: ""
	I1025 09:13:53.353063  256766 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:13:53.365734  256766 retry.go:31] will retry after 619.909739ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:13:53Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:13:53.986594  256766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:13:54.001576  256766 pause.go:52] kubelet running: false
	I1025 09:13:54.001671  256766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:13:54.187334  256766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:13:54.187418  256766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:13:54.261420  256766 cri.go:89] found id: "9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf"
	I1025 09:13:54.261452  256766 cri.go:89] found id: "99317f7c2bffae4d40739f1b3aa6bab2ce12ad89e6c1c3c128a638478a0960af"
	I1025 09:13:54.261458  256766 cri.go:89] found id: "ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0"
	I1025 09:13:54.261462  256766 cri.go:89] found id: "9555087b4a95dd49c3a02af93de2be326ddca27814e2068040e5e19d323de57c"
	I1025 09:13:54.261465  256766 cri.go:89] found id: "51bc04f01d285b33d2ffd2d4857d9986a3d390c118d677a906b8b1b3854fcffe"
	I1025 09:13:54.261469  256766 cri.go:89] found id: "33011a5a64acfce349c374b43be041eef3d52dab4c91a5a31072f67152719323"
	I1025 09:13:54.261473  256766 cri.go:89] found id: "6ac72fdf21daf14e251d8647264ae6703ade9663ba42a5c79cbd7ff91e1f523d"
	I1025 09:13:54.261476  256766 cri.go:89] found id: "023f43058735fc1aa667aba8a40553db5ed69c2c3aa83f526a3647121923840a"
	I1025 09:13:54.261481  256766 cri.go:89] found id: "3e8098e047ed3043a00cc812d78042ae68cad7ea01ba443d06753c58aca09dec"
	I1025 09:13:54.261493  256766 cri.go:89] found id: "48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	I1025 09:13:54.261498  256766 cri.go:89] found id: "9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc"
	I1025 09:13:54.261501  256766 cri.go:89] found id: ""
	I1025 09:13:54.261549  256766 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:13:54.283162  256766 out.go:203] 
	W1025 09:13:54.284722  256766 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:13:54.284749  256766 out.go:285] * 
	* 
	W1025 09:13:54.291328  256766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:13:54.292848  256766 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-016092 --alsologtostderr -v=1 failed: exit status 80
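The failure above reduces to a single error: `sudo runc list -f json` exits with "open /run/runc: no such file or directory", so the pause path can never enumerate running containers even though the `crictl ps` call immediately before it (filtering on the io.kubernetes.pod.namespace labels) succeeds. One plausible explanation, offered here as an assumption rather than something these logs confirm, is that CRI-O on this image is configured with an OCI runtime whose state directory is not /run/runc (crun, for example, keeps its state under /run/crun), so the runc state directory never exists. A minimal Go sketch of a runtime-agnostic listing step follows; listRunningContainers is a hypothetical helper for illustration, not minikube's actual pause implementation, and it assumes crictl is present on the node's PATH.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// listRunningContainers prefers `runc list` when the runc state
	// directory exists, and otherwise falls back to `crictl ps`, which
	// answers from the CRI and works regardless of which low-level OCI
	// runtime (runc, crun, ...) the container engine is configured with.
	func listRunningContainers() ([]string, error) {
		if _, err := os.Stat("/run/runc"); err == nil {
			if out, err := exec.Command("sudo", "runc", "list", "-q").Output(); err == nil {
				return strings.Fields(string(out)), nil
			}
		}
		out, err := exec.Command("sudo", "crictl", "ps", "-q").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listRunningContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(strings.Join(ids, "\n"))
	}

Run on the node itself (for example via `minikube ssh`), the crictl fallback lists the running container IDs without depending on /run/runc existing, which is consistent with the container IDs the pause code did manage to collect above.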
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-016092
helpers_test.go:243: (dbg) docker inspect no-preload-016092:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3",
	        "Created": "2025-10-25T09:11:34.405672193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243062,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:12:48.852001019Z",
	            "FinishedAt": "2025-10-25T09:12:47.964995377Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/hosts",
	        "LogPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3-json.log",
	        "Name": "/no-preload-016092",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-016092:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-016092",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3",
	                "LowerDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-016092",
	                "Source": "/var/lib/docker/volumes/no-preload-016092/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-016092",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-016092",
	                "name.minikube.sigs.k8s.io": "no-preload-016092",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aab68c228a8f51e7c21b2d0c0d329bf63c474dcf12d3b92ff76d77930b99807c",
	            "SandboxKey": "/var/run/docker/netns/aab68c228a8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-016092": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:f0:5f:c1:31:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad973ee26d09cd8afb8873a923280f5e7c7740cd39b31b1cbf19d4d13b83d6e9",
	                    "EndpointID": "95948dffc13a50583b5652c4646a84c47eaf30e7f2a8232cce66de9733098045",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-016092",
	                        "242e1782ecdc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
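The decisive fields in the inspect dump above are State.Status and State.Paused: the container is still running and was never paused, consistent with the failed pause. Just those two fields can be extracted with docker's Go-template formatting (a sketch):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' no-preload-016092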
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092: exit status 2 (342.072059ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
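minikube status reflects component health in its exit code, so Host printing Running alongside a non-zero exit typically means another component (kubelet, apiserver) is not in its expected state; dropping the --format filter prints the full breakdown (a sketch against the same profile):

	out/minikube-linux-amd64 status -p no-preload-016092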
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-016092 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-016092 logs -n 25: (1.263787562s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-047620       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	│ stop    │ -p old-k8s-version-959110 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p missing-upgrade-047620                                                                                                                                                                                                                     │ missing-upgrade-047620       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ stop    │ -p no-preload-016092 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:13 UTC │
	│ image   │ old-k8s-version-959110 image list --format=json                                                                                                                                                                                               │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ pause   │ -p old-k8s-version-959110 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p cert-expiration-851718                                                                                                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p disable-driver-mounts-664368                                                                                                                                                                                                               │ disable-driver-mounts-664368 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	│ image   │ no-preload-016092 image list --format=json                                                                                                                                                                                                    │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ pause   │ -p no-preload-016092 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
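	The Audit table above is minikube's persisted record of recent CLI invocations; it is part of the dump produced by the logs call at helpers_test.go:255, so the same sections can be regenerated by hand (a sketch):

	  out/minikube-linux-amd64 -p no-preload-016092 logs -n 25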
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:13:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:13:28.612634  253344 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:13:28.612923  253344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:28.612933  253344 out.go:374] Setting ErrFile to fd 2...
	I1025 09:13:28.612938  253344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:28.613208  253344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:13:28.613765  253344 out.go:368] Setting JSON to false
	I1025 09:13:28.615028  253344 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3357,"bootTime":1761380252,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:13:28.615178  253344 start.go:141] virtualization: kvm guest
	I1025 09:13:28.616968  253344 out.go:179] * [default-k8s-diff-port-891466] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:13:28.618661  253344 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:13:28.618627  253344 notify.go:220] Checking for updates...
	I1025 09:13:28.621242  253344 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:13:28.622560  253344 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:13:28.624000  253344 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:13:28.625467  253344 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:13:28.627009  253344 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:13:28.629156  253344 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:28.629302  253344 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:28.629437  253344 config.go:182] Loaded profile config "no-preload-016092": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:28.629552  253344 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:13:28.653857  253344 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:13:28.653975  253344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:13:28.712581  253344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:13:28.701437352 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:13:28.712716  253344 docker.go:318] overlay module found
	I1025 09:13:28.714509  253344 out.go:179] * Using the docker driver based on user configuration
	I1025 09:13:28.715778  253344 start.go:305] selected driver: docker
	I1025 09:13:28.715798  253344 start.go:925] validating driver "docker" against <nil>
	I1025 09:13:28.715809  253344 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:13:28.716349  253344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:13:28.775607  253344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:13:28.764937778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:13:28.775823  253344 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:13:28.776015  253344 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:13:28.777836  253344 out.go:179] * Using Docker driver with root privileges
	I1025 09:13:28.779224  253344 cni.go:84] Creating CNI manager for ""
	I1025 09:13:28.779295  253344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:13:28.779307  253344 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:13:28.779376  253344 start.go:349] cluster config:
	{Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:13:28.780778  253344 out.go:179] * Starting "default-k8s-diff-port-891466" primary control-plane node in "default-k8s-diff-port-891466" cluster
	I1025 09:13:28.781933  253344 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:13:28.783248  253344 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:13:28.784599  253344 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:13:28.784671  253344 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:13:28.784694  253344 cache.go:58] Caching tarball of preloaded images
	I1025 09:13:28.784700  253344 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:13:28.784795  253344 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:13:28.784812  253344 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:13:28.784903  253344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/config.json ...
	I1025 09:13:28.784925  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/config.json: {Name:mk3880c3b0ab49643a06cf82efa08e2ab5917cfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:28.808126  253344 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:13:28.808147  253344 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:13:28.808162  253344 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:13:28.808190  253344 start.go:360] acquireMachinesLock for default-k8s-diff-port-891466: {Name:mke06babecb9ce5542f3c73a3ce93e6aca9a1c40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:13:28.808282  253344 start.go:364] duration metric: took 76.578µs to acquireMachinesLock for "default-k8s-diff-port-891466"
	I1025 09:13:28.808304  253344 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:13:28.808374  253344 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:13:27.537720  247074 addons.go:514] duration metric: took 570.774378ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:13:27.772365  247074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-106968" context rescaled to 1 replicas
	W1025 09:13:29.271032  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:27.859984  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:27.860019  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:30.377709  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:30.378162  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:30.378233  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:30.378304  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:30.415153  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:30.415180  225660 cri.go:89] found id: ""
	I1025 09:13:30.415191  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:30.415253  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:30.419467  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:30.419539  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:30.449268  225660 cri.go:89] found id: ""
	I1025 09:13:30.449292  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.449303  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:30.449310  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:30.449369  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:30.478385  225660 cri.go:89] found id: ""
	I1025 09:13:30.478408  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.478416  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:30.478422  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:30.478477  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:30.511723  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:30.511744  225660 cri.go:89] found id: ""
	I1025 09:13:30.511751  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:30.511799  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:30.516073  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:30.516146  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:30.546036  225660 cri.go:89] found id: ""
	I1025 09:13:30.546059  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.546069  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:30.546076  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:30.546135  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:30.575208  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:30.575236  225660 cri.go:89] found id: ""
	I1025 09:13:30.575245  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:30.575307  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:30.579464  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:30.579540  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:30.611243  225660 cri.go:89] found id: ""
	I1025 09:13:30.611274  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.611285  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:30.611294  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:30.611360  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:30.639765  225660 cri.go:89] found id: ""
	I1025 09:13:30.639795  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.639806  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:30.639817  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:30.639829  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:30.669086  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:30.669125  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:30.724354  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:30.724388  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:30.757723  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:30.757760  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:30.850302  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:30.850360  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:30.865928  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:30.865954  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:30.935487  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
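	The describe-nodes failure above is a symptom, not the cause: the apiserver on this node is not answering. The condition the retry loop keeps polling can be probed directly (a sketch; -k skips the cluster's self-signed certificate, and the crictl call matches the one the harness runs):

	  curl -k https://192.168.85.2:8443/healthz
	  sudo crictl ps -a --quiet --name=kube-apiserver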
	I1025 09:13:30.935505  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:30.935518  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:30.974924  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:30.974970  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	W1025 09:13:30.595447  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
	W1025 09:13:33.094325  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
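	The coredns readiness these retries are waiting for can also be checked by hand with kubectl's wait verb (a sketch; the pod name is taken from the log lines above, and kubectl must be pointed at the same cluster):

	  kubectl -n kube-system wait --for=condition=Ready pod/coredns-66bc5c9577-g85s4 --timeout=60s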
	I1025 09:13:28.811138  253344 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:13:28.811360  253344 start.go:159] libmachine.API.Create for "default-k8s-diff-port-891466" (driver="docker")
	I1025 09:13:28.811389  253344 client.go:168] LocalClient.Create starting
	I1025 09:13:28.811450  253344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:13:28.811486  253344 main.go:141] libmachine: Decoding PEM data...
	I1025 09:13:28.811504  253344 main.go:141] libmachine: Parsing certificate...
	I1025 09:13:28.811567  253344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:13:28.811594  253344 main.go:141] libmachine: Decoding PEM data...
	I1025 09:13:28.811604  253344 main.go:141] libmachine: Parsing certificate...
	I1025 09:13:28.811971  253344 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-891466 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:13:28.829900  253344 cli_runner.go:211] docker network inspect default-k8s-diff-port-891466 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:13:28.829975  253344 network_create.go:284] running [docker network inspect default-k8s-diff-port-891466] to gather additional debugging logs...
	I1025 09:13:28.829992  253344 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-891466
	W1025 09:13:28.846910  253344 cli_runner.go:211] docker network inspect default-k8s-diff-port-891466 returned with exit code 1
	I1025 09:13:28.846941  253344 network_create.go:287] error running [docker network inspect default-k8s-diff-port-891466]: docker network inspect default-k8s-diff-port-891466: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-891466 not found
	I1025 09:13:28.846957  253344 network_create.go:289] output of [docker network inspect default-k8s-diff-port-891466]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-891466 not found
	
	** /stderr **
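	The inspect error above is expected on a first start: the network does not exist until network_create builds it a few lines below. Once created, the chosen subnet and gateway can be confirmed with a plain inspect (a sketch):

	  docker network inspect default-k8s-diff-port-891466 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'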
	I1025 09:13:28.847060  253344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:13:28.864803  253344 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:13:28.865764  253344 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:13:28.866565  253344 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:13:28.867560  253344 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e53dd0}
	I1025 09:13:28.867588  253344 network_create.go:124] attempt to create docker network default-k8s-diff-port-891466 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 09:13:28.867662  253344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 default-k8s-diff-port-891466
	I1025 09:13:28.931110  253344 network_create.go:108] docker network default-k8s-diff-port-891466 192.168.76.0/24 created
	I1025 09:13:28.931151  253344 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-891466" container
	I1025 09:13:28.931217  253344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:13:28.950678  253344 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-891466 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:13:28.970188  253344 oci.go:103] Successfully created a docker volume default-k8s-diff-port-891466
	I1025 09:13:28.970279  253344 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-891466-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --entrypoint /usr/bin/test -v default-k8s-diff-port-891466:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:13:29.375768  253344 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-891466
	I1025 09:13:29.375827  253344 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:13:29.375853  253344 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:13:29.375934  253344 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-891466:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 09:13:31.771416  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:33.771765  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:33.527412  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:33.528046  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:33.528104  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:33.528161  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:33.556662  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:33.556690  225660 cri.go:89] found id: ""
	I1025 09:13:33.556700  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:33.556769  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:33.560872  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:33.560968  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:33.590079  225660 cri.go:89] found id: ""
	I1025 09:13:33.590105  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.590114  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:33.590123  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:33.590178  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:33.618754  225660 cri.go:89] found id: ""
	I1025 09:13:33.618781  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.618790  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:33.618796  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:33.618848  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:33.646274  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:33.646298  225660 cri.go:89] found id: ""
	I1025 09:13:33.646315  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:33.646408  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:33.650436  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:33.650507  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:33.679391  225660 cri.go:89] found id: ""
	I1025 09:13:33.679420  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.679438  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:33.679446  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:33.679503  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:33.707730  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:33.707756  225660 cri.go:89] found id: ""
	I1025 09:13:33.707765  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:33.707822  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:33.711941  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:33.712016  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:33.740260  225660 cri.go:89] found id: ""
	I1025 09:13:33.740283  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.740291  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:33.740297  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:33.740353  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:33.767808  225660 cri.go:89] found id: ""
	I1025 09:13:33.767836  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.767844  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:33.767852  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:33.767863  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:33.800102  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:33.800130  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:33.891542  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:33.891574  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:33.909599  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:33.909678  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:33.979672  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:13:33.979697  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:33.979715  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:34.014457  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:34.014490  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:34.069093  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:34.069131  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:34.104461  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:34.104494  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:36.664742  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:36.665199  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:36.665253  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:36.665303  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:36.694592  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:36.694618  225660 cri.go:89] found id: ""
	I1025 09:13:36.694627  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:36.694706  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:36.698617  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:36.698694  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:36.726736  225660 cri.go:89] found id: ""
	I1025 09:13:36.726768  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.726780  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:36.726787  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:36.726839  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:36.754524  225660 cri.go:89] found id: ""
	I1025 09:13:36.754572  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.754585  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:36.754594  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:36.754673  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:36.781493  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:36.781521  225660 cri.go:89] found id: ""
	I1025 09:13:36.781532  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:36.781596  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:36.785445  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:36.785506  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:36.811760  225660 cri.go:89] found id: ""
	I1025 09:13:36.811791  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.811803  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:36.811812  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:36.811874  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:36.840331  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:36.840352  225660 cri.go:89] found id: ""
	I1025 09:13:36.840360  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:36.840415  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:36.844625  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:36.844709  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:36.875916  225660 cri.go:89] found id: ""
	I1025 09:13:36.875947  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.875959  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:36.875968  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:36.876025  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:36.904849  225660 cri.go:89] found id: ""
	I1025 09:13:36.904878  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.904890  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:36.904901  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:36.904919  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:36.937370  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:36.937402  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:37.029654  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:37.029690  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:37.045767  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:37.045804  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:37.108584  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:13:37.108601  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:37.108612  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:37.142737  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:37.142769  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:37.198801  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:37.198850  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:37.229771  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:37.229802  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
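
Both passes above follow the same probe-and-gather pattern: hit the apiserver's /healthz endpoint, and while it refuses connections, re-enumerate the CRI containers and pull logs from whichever control-plane pieces exist. A minimal standalone sketch of that loop, reusing the endpoint from this log (the retry interval and tail length are placeholders, not minikube's actual values):

	# probe apiserver health; while it is down, gather what logs exist (sketch)
	APISERVER=https://192.168.85.2:8443/healthz
	until curl -fsk --max-time 2 "$APISERVER" >/dev/null; do
	  for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	    sudo crictl logs --tail 400 "$id" || true
	  done
	  sudo journalctl -u kubelet -n 400 --no-pager
	  sudo journalctl -u crio -n 400 --no-pager
	  sleep 3
	done
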
	I1025 09:13:33.886994  253344 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-891466:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.511015973s)
	I1025 09:13:33.887025  253344 kic.go:203] duration metric: took 4.511169814s to extract preloaded images to volume ...
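
Note how the extraction above never copies the tarball into the node: it bind-mounts the preload read-only into a throwaway container whose entrypoint is tar, and unpacks straight into the node's named volume. The same trick in isolation, with placeholder tarball and volume names:

	# unpack an lz4 tarball into a docker named volume via a throwaway container (sketch)
	TARBALL=/path/to/preloaded-images.tar.lz4   # placeholder
	VOLUME=my-node-volume                       # placeholder
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
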
	W1025 09:13:33.887131  253344 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:13:33.887182  253344 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:13:33.887226  253344 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:13:33.949542  253344 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-891466 --name default-k8s-diff-port-891466 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --network default-k8s-diff-port-891466 --ip 192.168.76.2 --volume default-k8s-diff-port-891466:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:13:34.244093  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Running}}
	I1025 09:13:34.263073  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:34.282841  253344 cli_runner.go:164] Run: docker exec default-k8s-diff-port-891466 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:13:34.328688  253344 oci.go:144] the created container "default-k8s-diff-port-891466" has a running status.
	I1025 09:13:34.328727  253344 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa...
	I1025 09:13:34.798497  253344 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:13:34.825938  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:34.845580  253344 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:13:34.845603  253344 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-891466 chown docker:docker /home/docker/.ssh/authorized_keys]
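
SSH provisioning for the kic node is plain container bootstrap: generate a keypair on the host, write the public half to /home/docker/.ssh/authorized_keys inside the container, and fix ownership with a privileged exec. A hedged sketch with placeholder paths:

	# install an SSH public key into a running container (sketch)
	NODE=default-k8s-diff-port-891466   # container name from this log
	KEY=./id_rsa                        # placeholder key path
	ssh-keygen -t rsa -b 2048 -N "" -f "$KEY"
	docker exec --privileged "$NODE" mkdir -p /home/docker/.ssh
	docker exec --privileged -i "$NODE" sh -c 'cat > /home/docker/.ssh/authorized_keys' < "$KEY.pub"
	docker exec --privileged "$NODE" chown -R docker:docker /home/docker/.ssh
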
	I1025 09:13:34.896673  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:34.917302  253344 machine.go:93] provisionDockerMachine start ...
	I1025 09:13:34.917413  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:34.939156  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:34.939489  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:34.939508  253344 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:13:35.083872  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-891466
	
	I1025 09:13:35.083902  253344 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-891466"
	I1025 09:13:35.083961  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:35.103684  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:35.103888  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:35.103901  253344 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-891466 && echo "default-k8s-diff-port-891466" | sudo tee /etc/hostname
	I1025 09:13:35.255338  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-891466
	
	I1025 09:13:35.255472  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:35.274259  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:35.274488  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:35.274509  253344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-891466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-891466/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-891466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:13:35.418809  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:13:35.418835  253344 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:13:35.418871  253344 ubuntu.go:190] setting up certificates
	I1025 09:13:35.418888  253344 provision.go:84] configureAuth start
	I1025 09:13:35.418954  253344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-891466
	I1025 09:13:35.437913  253344 provision.go:143] copyHostCerts
	I1025 09:13:35.437967  253344 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:13:35.437977  253344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:13:35.438044  253344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:13:35.438138  253344 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:13:35.438146  253344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:13:35.438171  253344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:13:35.438225  253344 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:13:35.438232  253344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:13:35.438254  253344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:13:35.438312  253344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-891466 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-891466 localhost minikube]
	I1025 09:13:36.068834  253344 provision.go:177] copyRemoteCerts
	I1025 09:13:36.068898  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:13:36.068944  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.087633  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.189289  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:13:36.208791  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:13:36.226959  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:13:36.245095  253344 provision.go:87] duration metric: took 826.193227ms to configureAuth
	I1025 09:13:36.245125  253344 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:13:36.245283  253344 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:36.245386  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.265056  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:36.265266  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:36.265283  253344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:13:36.520347  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:13:36.520374  253344 machine.go:96] duration metric: took 1.603039758s to provisionDockerMachine
	I1025 09:13:36.520386  253344 client.go:171] duration metric: took 7.708991923s to LocalClient.Create
	I1025 09:13:36.520409  253344 start.go:167] duration metric: took 7.709048128s to libmachine.API.Create "default-k8s-diff-port-891466"
	I1025 09:13:36.520422  253344 start.go:293] postStartSetup for "default-k8s-diff-port-891466" (driver="docker")
	I1025 09:13:36.520435  253344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:13:36.520500  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:13:36.520546  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.539119  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.641739  253344 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:13:36.645540  253344 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:13:36.645577  253344 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:13:36.645591  253344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:13:36.645676  253344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:13:36.645752  253344 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:13:36.645842  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:13:36.654020  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:13:36.675745  253344 start.go:296] duration metric: took 155.310502ms for postStartSetup
	I1025 09:13:36.676071  253344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-891466
	I1025 09:13:36.696385  253344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/config.json ...
	I1025 09:13:36.696748  253344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:13:36.696803  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.715938  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.815045  253344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:13:36.819541  253344 start.go:128] duration metric: took 8.011154809s to createHost
	I1025 09:13:36.819571  253344 start.go:83] releasing machines lock for "default-k8s-diff-port-891466", held for 8.011275909s
	I1025 09:13:36.819658  253344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-891466
	I1025 09:13:36.840827  253344 ssh_runner.go:195] Run: cat /version.json
	I1025 09:13:36.840888  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.840897  253344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:13:36.840988  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.862345  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.862673  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.961996  253344 ssh_runner.go:195] Run: systemctl --version
	I1025 09:13:37.020812  253344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:13:37.058365  253344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:13:37.063428  253344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:13:37.063494  253344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:13:37.093001  253344 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
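
The find command above is logged without the shell quoting it actually needs, so the parentheses and globs appear bare. A runnable, quoted form of the same disable-and-rename pass (GNU find; the "$1" indirection is a slightly safer spelling of the logged mv):

	# move bridge/podman CNI configs out of the way (quoted equivalent, sketch)
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
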
	I1025 09:13:37.093027  253344 start.go:495] detecting cgroup driver to use...
	I1025 09:13:37.093059  253344 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:13:37.093108  253344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:13:37.111322  253344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:13:37.124229  253344 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:13:37.124301  253344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:13:37.142801  253344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:13:37.162191  253344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:13:37.255758  253344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:13:37.343008  253344 docker.go:234] disabling docker service ...
	I1025 09:13:37.343078  253344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:13:37.363249  253344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:13:37.376353  253344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:13:37.463927  253344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:13:37.548833  253344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:13:37.561400  253344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:13:37.575902  253344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:13:37.575952  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.586846  253344 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:13:37.586912  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.596834  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.606316  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.615263  253344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:13:37.623614  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.632705  253344 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.647101  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.656455  253344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:13:37.664395  253344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:13:37.672352  253344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:13:37.749344  253344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:13:37.854499  253344 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:13:37.854583  253344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:13:37.858589  253344 start.go:563] Will wait 60s for crictl version
	I1025 09:13:37.858669  253344 ssh_runner.go:195] Run: which crictl
	I1025 09:13:37.862272  253344 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:13:37.887590  253344 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:13:37.887685  253344 ssh_runner.go:195] Run: crio --version
	I1025 09:13:37.916279  253344 ssh_runner.go:195] Run: crio --version
	I1025 09:13:37.946711  253344 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 09:13:35.095298  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
	W1025 09:13:37.594143  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
	I1025 09:13:38.594604  242862 pod_ready.go:94] pod "coredns-66bc5c9577-g85s4" is "Ready"
	I1025 09:13:38.594632  242862 pod_ready.go:86] duration metric: took 39.505656882s for pod "coredns-66bc5c9577-g85s4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.597158  242862 pod_ready.go:83] waiting for pod "etcd-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.601110  242862 pod_ready.go:94] pod "etcd-no-preload-016092" is "Ready"
	I1025 09:13:38.601133  242862 pod_ready.go:86] duration metric: took 3.949257ms for pod "etcd-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.603088  242862 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.607142  242862 pod_ready.go:94] pod "kube-apiserver-no-preload-016092" is "Ready"
	I1025 09:13:38.607163  242862 pod_ready.go:86] duration metric: took 4.053485ms for pod "kube-apiserver-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.608894  242862 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:37.947913  253344 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-891466 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:13:37.965567  253344 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:13:37.969682  253344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
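
That one-liner is the usual way to edit /etc/hosts inside a container, where sed -i would fail: docker bind-mounts /etc/hosts, so the file must be rewritten through its existing inode rather than replaced. Annotated:

	# replace one /etc/hosts entry without sed -i (annotated sketch)
	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	  printf '192.168.76.1\thost.minikube.internal\n'   # append the fresh one
	} > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts   # cp writes through the inode, safe on bind mounts
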
	I1025 09:13:37.980473  253344 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:13:37.980579  253344 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:13:37.980632  253344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:13:38.015046  253344 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:13:38.015069  253344 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:13:38.015111  253344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:13:38.042183  253344 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:13:38.042203  253344 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:13:38.042210  253344 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1025 09:13:38.042314  253344 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-891466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:13:38.042403  253344 ssh_runner.go:195] Run: crio config
	I1025 09:13:38.086906  253344 cni.go:84] Creating CNI manager for ""
	I1025 09:13:38.086929  253344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:13:38.086948  253344 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:13:38.086973  253344 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-891466 NodeName:default-k8s-diff-port-891466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:13:38.087101  253344 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-891466"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:13:38.087173  253344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:13:38.096392  253344 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:13:38.096465  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:13:38.104633  253344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:13:38.117560  253344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:13:38.132792  253344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
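
The kubeadm config rendered above is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml once the stale-config checks further down pass (see the cp below). The init invocation itself is not shown in this excerpt; given the "ignoring SystemVerification" note that follows, it presumably looks roughly like this (the flag set is an assumption, not a quoted command):

	# rough shape of the init step on the docker driver (assumed flags, sketch)
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification
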
	I1025 09:13:38.145537  253344 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:13:38.149682  253344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:13:38.161340  253344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:13:38.248287  253344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:13:38.271547  253344 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466 for IP: 192.168.76.2
	I1025 09:13:38.271570  253344 certs.go:195] generating shared ca certs ...
	I1025 09:13:38.271591  253344 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:38.271790  253344 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:13:38.271859  253344 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:13:38.271873  253344 certs.go:257] generating profile certs ...
	I1025 09:13:38.271947  253344 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.key
	I1025 09:13:38.271972  253344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.crt with IP's: []
	W1025 09:13:36.271695  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:38.271800  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:38.792761  242862 pod_ready.go:94] pod "kube-controller-manager-no-preload-016092" is "Ready"
	I1025 09:13:38.792792  242862 pod_ready.go:86] duration metric: took 183.877835ms for pod "kube-controller-manager-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.993231  242862 pod_ready.go:83] waiting for pod "kube-proxy-h4nh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.392443  242862 pod_ready.go:94] pod "kube-proxy-h4nh4" is "Ready"
	I1025 09:13:39.392476  242862 pod_ready.go:86] duration metric: took 399.213308ms for pod "kube-proxy-h4nh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.593703  242862 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.994043  242862 pod_ready.go:94] pod "kube-scheduler-no-preload-016092" is "Ready"
	I1025 09:13:39.994076  242862 pod_ready.go:86] duration metric: took 400.339826ms for pod "kube-scheduler-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.994090  242862 pod_ready.go:40] duration metric: took 40.908672919s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:13:40.050250  242862 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:13:40.052464  242862 out.go:179] * Done! kubectl is now configured to use "no-preload-016092" cluster and "default" namespace by default
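
The pod_ready polling that ends here amounts to waiting on the Ready condition for each control-plane label listed in the summary line above. The same check done by hand with kubectl (an equivalent, not minikube's own code path):

	# equivalent readiness check with kubectl (not what minikube runs internally)
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=120s
	done
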
	I1025 09:13:38.638750  253344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.crt ...
	I1025 09:13:38.638787  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.crt: {Name:mk046a5c8eed99508a2b61f0b40d08593dd03598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:38.639007  253344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.key ...
	I1025 09:13:38.639031  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.key: {Name:mk09cfc5fc2cdd3078df5893e21ea0d1e1d8cd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:38.639168  253344 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9
	I1025 09:13:38.639186  253344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 09:13:39.031158  253344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9 ...
	I1025 09:13:39.031199  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9: {Name:mk89adc1c30ad279a00647ca0b020e75d01e0a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:39.031425  253344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9 ...
	I1025 09:13:39.031447  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9: {Name:mk80b618653cd83436b9beec85a76a55cd9f1741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:39.031557  253344 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt
	I1025 09:13:39.031682  253344 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key
	I1025 09:13:39.031779  253344 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key
	I1025 09:13:39.031804  253344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt with IP's: []
	I1025 09:13:39.540694  253344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt ...
	I1025 09:13:39.540724  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt: {Name:mkd76c037d2fac6ecb6bd6f8576f2d93fe21e890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:39.540917  253344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key ...
	I1025 09:13:39.540933  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key: {Name:mk0ff69cbc2b3e5dee62041d137d334e168780d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
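
Each profile cert above has the same shape: a fresh key plus a cert signed by the shared minikubeCA, with the apiserver cert carrying the service, loopback, and node IPs in its subjectAltName. An equivalent with plain openssl (paths are placeholders; the SAN list matches this log):

	# CA-signed apiserver cert with IP SANs, same shape as the profile certs (sketch)
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')
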
	I1025 09:13:39.541208  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:13:39.541258  253344 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:13:39.541274  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:13:39.541317  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:13:39.541364  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:13:39.541401  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:13:39.541454  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:13:39.542238  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:13:39.561823  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:13:39.580182  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:13:39.599721  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:13:39.618068  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:13:39.637573  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:13:39.655327  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:13:39.672859  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:13:39.690359  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:13:39.710387  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:13:39.729222  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:13:39.747742  253344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:13:39.760917  253344 ssh_runner.go:195] Run: openssl version
	I1025 09:13:39.767175  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:13:39.777045  253344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:13:39.781316  253344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:13:39.781383  253344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:13:39.827835  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:13:39.838008  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:13:39.847162  253344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:13:39.851876  253344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:13:39.851949  253344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:13:39.899767  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:13:39.909467  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:13:39.920221  253344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:13:39.925285  253344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:13:39.925353  253344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:13:39.964239  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
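
The openssl x509 -hash calls explain the odd link names just above: OpenSSL looks CAs up in /etc/ssl/certs by subject-name hash with a .0 suffix, so each installed PEM gets a hash-named symlink. Done by hand:

	# install a CA cert under its OpenSSL subject-hash name (sketch)
	PEM=/usr/share/ca-certificates/minikubeCA.pem   # cert path from this log
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"
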
	I1025 09:13:39.974234  253344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:13:39.979127  253344 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:13:39.979196  253344 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:13:39.979288  253344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:13:39.979359  253344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:13:40.011969  253344 cri.go:89] found id: ""
	I1025 09:13:40.012041  253344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:13:40.023011  253344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:13:40.033291  253344 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:13:40.033349  253344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:13:40.043480  253344 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:13:40.043505  253344 kubeadm.go:157] found existing configuration files:
	
	I1025 09:13:40.043558  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1025 09:13:40.053802  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:13:40.053921  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:13:40.062825  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1025 09:13:40.074619  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:13:40.074705  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:13:40.084334  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1025 09:13:40.092394  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:13:40.092472  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:13:40.100758  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1025 09:13:40.109509  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:13:40.109580  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
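
The four grep-and-remove pairs above implement one pattern: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is treated as stale and deleted before kubeadm init runs. A hedged sketch of that loop (file list and endpoint taken from the log; the function name is illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // removeStaleKubeconfigs deletes any config that exists but does not
    // mention the expected API endpoint, mirroring the sudo grep + rm -f
    // sequence above. Missing files (exit status 2 in the log) are fine
    // too: the remove is simply a no-op for them.
    func removeStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && bytes.Contains(data, []byte(endpoint)) {
                continue // endpoint present, keep the file
            }
            os.Remove(f)
            fmt.Printf("removed stale config (if present): %s\n", f)
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:8444")
    }
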
	I1025 09:13:40.117773  253344 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:13:40.166913  253344 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:13:40.166975  253344 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:13:40.194418  253344 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:13:40.194513  253344 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:13:40.194566  253344 kubeadm.go:318] OS: Linux
	I1025 09:13:40.194773  253344 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:13:40.194852  253344 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:13:40.194929  253344 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:13:40.195003  253344 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:13:40.195077  253344 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:13:40.195159  253344 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:13:40.195236  253344 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:13:40.195293  253344 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:13:40.266510  253344 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:13:40.266666  253344 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:13:40.266821  253344 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:13:40.276698  253344 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:13:39.796698  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:39.797133  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:39.797189  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:39.797247  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:39.829473  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:39.829496  225660 cri.go:89] found id: ""
	I1025 09:13:39.829505  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:39.829571  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:39.833965  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:39.834058  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:39.863164  225660 cri.go:89] found id: ""
	I1025 09:13:39.863191  225660 logs.go:282] 0 containers: []
	W1025 09:13:39.863202  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:39.863209  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:39.863266  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:39.896538  225660 cri.go:89] found id: ""
	I1025 09:13:39.896564  225660 logs.go:282] 0 containers: []
	W1025 09:13:39.896574  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:39.896582  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:39.896669  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:39.927160  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:39.927177  225660 cri.go:89] found id: ""
	I1025 09:13:39.927184  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:39.927228  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:39.931003  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:39.931094  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:39.959354  225660 cri.go:89] found id: ""
	I1025 09:13:39.959381  225660 logs.go:282] 0 containers: []
	W1025 09:13:39.959407  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:39.959415  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:39.959469  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:39.989132  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:39.989164  225660 cri.go:89] found id: ""
	I1025 09:13:39.989174  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:39.989229  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:39.993529  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:39.993606  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:40.027970  225660 cri.go:89] found id: ""
	I1025 09:13:40.028003  225660 logs.go:282] 0 containers: []
	W1025 09:13:40.028015  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:40.028023  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:40.028084  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:40.059261  225660 cri.go:89] found id: ""
	I1025 09:13:40.059288  225660 logs.go:282] 0 containers: []
	W1025 09:13:40.059299  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:40.059310  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:40.059328  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:40.078575  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:40.078613  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:40.151226  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:13:40.151248  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:40.151262  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:40.192832  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:40.192860  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:40.251960  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:40.251992  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:40.283271  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:40.283302  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:40.337699  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:40.337732  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:40.377929  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:40.377959  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
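
Each "Gathering logs for ..." pass above follows the same recipe: list container IDs for a component with `crictl ps -a --quiet --name=<component>`, then tail each container's log. A minimal local sketch of that recipe (running directly rather than over ssh_runner, with illustrative names; it assumes crictl is installed and sudo-accessible):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // tailComponentLogs lists containers whose name matches component and
    // tails the last n lines of each, like the crictl ps / crictl logs
    // pairs in the log above.
    func tailComponentLogs(component string, n int) error {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Printf("No container was found matching %q\n", component)
            return nil
        }
        for _, id := range ids {
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
            _ = tailComponentLogs(c, 400)
        }
    }
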
	I1025 09:13:40.280341  253344 out.go:252]   - Generating certificates and keys ...
	I1025 09:13:40.280472  253344 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:13:40.280609  253344 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:13:40.706013  253344 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:13:41.001628  253344 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:13:41.249981  253344 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:13:41.347456  253344 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:13:41.448542  253344 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:13:41.448821  253344 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-891466 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:13:42.018111  253344 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:13:42.018328  253344 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-891466 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:13:42.202146  253344 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:13:42.399186  253344 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:13:42.676858  253344 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:13:42.676927  253344 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:13:43.037169  253344 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:13:43.406779  253344 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:13:44.008980  253344 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:13:44.046618  253344 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:13:44.671406  253344 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:13:44.672034  253344 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:13:44.676269  253344 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 09:13:40.770796  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:42.771402  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:42.983033  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:44.678057  253344 out.go:252]   - Booting up control plane ...
	I1025 09:13:44.678182  253344 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:13:44.678306  253344 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:13:44.679171  253344 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:13:44.693039  253344 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:13:44.693168  253344 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:13:44.700001  253344 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:13:44.701315  253344 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:13:44.701364  253344 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:13:44.802920  253344 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:13:44.803062  253344 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:13:45.303734  253344 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.907174ms
	I1025 09:13:45.308139  253344 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:13:45.308262  253344 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1025 09:13:45.308377  253344 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:13:45.308491  253344 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:13:46.856938  253344 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.548713234s
	I1025 09:13:47.362030  253344 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.053769612s
	I1025 09:13:49.310555  253344 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00236809s
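
The three control-plane checks above poll fixed endpoints until they report healthy: kube-apiserver on its advertised address at /livez, and the controller-manager and scheduler on localhost at /healthz and /livez. A rough sketch of such a poll loop (endpoints copied from the log lines above; the loop mechanics and the 500ms interval are assumptions, and TLS verification is skipped only because these are local self-signed component endpoints):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 or the deadline passes,
    // roughly what the [control-plane-check] phase above is doing.
    func waitHealthy(name, url string, timeout time.Duration) {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Local, self-signed component endpoints; never do this
                // against an endpoint you did not provision yourself.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        start := time.Now()
        for time.Since(start) < timeout {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s is healthy after %s\n", name, time.Since(start))
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Printf("%s: timed out after %s\n", name, timeout)
    }

    func main() {
        waitHealthy("kube-controller-manager", "https://127.0.0.1:10257/healthz", 4*time.Minute)
        waitHealthy("kube-scheduler", "https://127.0.0.1:10259/livez", 4*time.Minute)
        waitHealthy("kube-apiserver", "https://192.168.76.2:8444/livez", 4*time.Minute)
    }
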
	I1025 09:13:49.323457  253344 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:13:49.336425  253344 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:13:49.347724  253344 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:13:49.348057  253344 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-891466 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:13:49.357897  253344 kubeadm.go:318] [bootstrap-token] Using token: a2dy9q.ohzp7ddafsou5lmk
	W1025 09:13:45.271388  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:47.274137  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:49.771519  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:49.359796  253344 out.go:252]   - Configuring RBAC rules ...
	I1025 09:13:49.359968  253344 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:13:49.362514  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:13:49.368552  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:13:49.372201  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:13:49.374852  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:13:49.377367  253344 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:13:49.717433  253344 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:13:50.134284  253344 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:13:50.716951  253344 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:13:50.717840  253344 kubeadm.go:318] 
	I1025 09:13:50.717903  253344 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:13:50.717911  253344 kubeadm.go:318] 
	I1025 09:13:50.717981  253344 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:13:50.717988  253344 kubeadm.go:318] 
	I1025 09:13:50.718028  253344 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:13:50.718082  253344 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:13:50.718166  253344 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:13:50.718187  253344 kubeadm.go:318] 
	I1025 09:13:50.718248  253344 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:13:50.718257  253344 kubeadm.go:318] 
	I1025 09:13:50.718311  253344 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:13:50.718319  253344 kubeadm.go:318] 
	I1025 09:13:50.718386  253344 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:13:50.718499  253344 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:13:50.718595  253344 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:13:50.718611  253344 kubeadm.go:318] 
	I1025 09:13:50.718749  253344 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:13:50.718819  253344 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:13:50.718825  253344 kubeadm.go:318] 
	I1025 09:13:50.718901  253344 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token a2dy9q.ohzp7ddafsou5lmk \
	I1025 09:13:50.718995  253344 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:13:50.719016  253344 kubeadm.go:318] 	--control-plane 
	I1025 09:13:50.719022  253344 kubeadm.go:318] 
	I1025 09:13:50.719111  253344 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:13:50.719123  253344 kubeadm.go:318] 
	I1025 09:13:50.719240  253344 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token a2dy9q.ohzp7ddafsou5lmk \
	I1025 09:13:50.719362  253344 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:13:50.722786  253344 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:13:50.722911  253344 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:13:50.722940  253344 cni.go:84] Creating CNI manager for ""
	I1025 09:13:50.722953  253344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:13:50.724917  253344 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:13:47.984259  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 09:13:47.984327  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:47.984407  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:48.011779  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:13:48.011800  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:48.011806  225660 cri.go:89] found id: ""
	I1025 09:13:48.011815  225660 logs.go:282] 2 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:48.011882  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.016325  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.020013  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:48.020089  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:48.047225  225660 cri.go:89] found id: ""
	I1025 09:13:48.047255  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.047265  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:48.047284  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:48.047343  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:48.074288  225660 cri.go:89] found id: ""
	I1025 09:13:48.074315  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.074323  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:48.074330  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:48.074385  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:48.102465  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:48.102493  225660 cri.go:89] found id: ""
	I1025 09:13:48.102502  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:48.102562  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.106812  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:48.106865  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:48.133980  225660 cri.go:89] found id: ""
	I1025 09:13:48.134006  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.134016  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:48.134023  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:48.134126  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:48.161510  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:48.161530  225660 cri.go:89] found id: ""
	I1025 09:13:48.161538  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:48.161594  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.165480  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:48.165544  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:48.193550  225660 cri.go:89] found id: ""
	I1025 09:13:48.193593  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.193603  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:48.193609  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:48.193676  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:48.220314  225660 cri.go:89] found id: ""
	I1025 09:13:48.220353  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.220366  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:48.220386  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:48.220405  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 09:13:50.726217  253344 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:13:50.730935  253344 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:13:50.730957  253344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:13:50.744241  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:13:50.964956  253344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:13:50.965225  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:50.965274  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-891466 minikube.k8s.io/updated_at=2025_10_25T09_13_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=default-k8s-diff-port-891466 minikube.k8s.io/primary=true
	I1025 09:13:51.044672  253344 ops.go:34] apiserver oom_adj: -16
	I1025 09:13:51.044833  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:51.545833  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:52.045873  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:52.545835  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:53.045001  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:53.545347  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
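
The half-second cadence of the `kubectl get sa default` runs above (09:13:51.044, 51.545, 52.045, ...) is a readiness gate: bootstrap does not proceed until the default service account exists in the default namespace. A minimal sketch of that gate (command and kubeconfig path taken from the log; the retry interval is inferred from the timestamps and the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds,
    // mirroring the ~500ms polling visible in the timestamps above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // service account exists; bootstrap can proceed
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
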
	
	
	==> CRI-O <==
	Oct 25 09:13:10 no-preload-016092 crio[567]: time="2025-10-25T09:13:10.220527928Z" level=info msg="Created container 9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4/kubernetes-dashboard" id=4dffcfc9-602a-4417-89c5-4633ea54e5fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:10 no-preload-016092 crio[567]: time="2025-10-25T09:13:10.221212261Z" level=info msg="Starting container: 9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc" id=4c0871d9-0b23-4ee6-9755-d8adbd95fe39 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:13:10 no-preload-016092 crio[567]: time="2025-10-25T09:13:10.223282018Z" level=info msg="Started container" PID=1728 containerID=9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4/kubernetes-dashboard id=4c0871d9-0b23-4ee6-9755-d8adbd95fe39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d3e761b6febe0d9e746c5e6ab6eae31fb3c3e60051c9aeb52bbaf9ca2804109
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.203544116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=59592001-4b98-4dc5-a617-39463e1f3ee9 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.204696112Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f0fd89c1-9a50-4020-8701-d9b1e8cdc5e8 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.206100259Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper" id=ff5125c0-c4d4-4bcd-a5c4-8ef919797977 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.206251613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.212099392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.212591038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.234703047Z" level=info msg="Created container 48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper" id=ff5125c0-c4d4-4bcd-a5c4-8ef919797977 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.235363043Z" level=info msg="Starting container: 48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957" id=10ef94d9-f960-4a82-a44b-7163a47b566f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.237157788Z" level=info msg="Started container" PID=1746 containerID=48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper id=10ef94d9-f960-4a82-a44b-7163a47b566f name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb7e2419735d896306c88d1b65db425c69c65403f67a9fb4a1f1aac8762cf4a5
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.334133678Z" level=info msg="Removing container: 2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749" id=cfeed54d-ec00-437e-a35d-d3a700093dcb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.345080011Z" level=info msg="Removed container 2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper" id=cfeed54d-ec00-437e-a35d-d3a700093dcb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.345365493Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a7de27b4-d509-4413-95be-920e7fd16136 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.346449098Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=85280be6-38da-49df-ad19-bff4cf22872e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.34758891Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ac8cf5dc-ea82-4547-be69-3825b602fa49 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.347776503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355270442Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355510635Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eaa077ef4f275978fef15e106b6388606535b93fb39fe0c599d4c9b1be196eeb/merged/etc/passwd: no such file or directory"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355549712Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eaa077ef4f275978fef15e106b6388606535b93fb39fe0c599d4c9b1be196eeb/merged/etc/group: no such file or directory"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355915213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.379984563Z" level=info msg="Created container 9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf: kube-system/storage-provisioner/storage-provisioner" id=ac8cf5dc-ea82-4547-be69-3825b602fa49 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.380775632Z" level=info msg="Starting container: 9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf" id=95d722b8-07b3-410f-8684-e1326c779f29 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.383074907Z" level=info msg="Started container" PID=1760 containerID=9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf description=kube-system/storage-provisioner/storage-provisioner id=95d722b8-07b3-410f-8684-e1326c779f29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05567045ae6d91743c3ac4c6da11880ab9c59f08c5bc369e54b849ea72b6086b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9bd58a21f5517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   05567045ae6d9       storage-provisioner                          kube-system
	48ee308605e8a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   cb7e2419735d8       dashboard-metrics-scraper-6ffb444bf9-ft5jh   kubernetes-dashboard
	9a3c9cdae69ba       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   4d3e761b6febe       kubernetes-dashboard-855c9754f9-jnwc4        kubernetes-dashboard
	99317f7c2bffa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   8127a224132ce       coredns-66bc5c9577-g85s4                     kube-system
	e14e55c01173c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   cc55acd33cb5a       busybox                                      default
	ffd907d4e4196       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   05567045ae6d9       storage-provisioner                          kube-system
	9555087b4a95d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   bcaa56827f6d7       kindnet-mjnmk                                kube-system
	51bc04f01d285       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   f46a7bf426b16       kube-proxy-h4nh4                             kube-system
	33011a5a64acf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   51350c944ea62       kube-scheduler-no-preload-016092             kube-system
	6ac72fdf21daf       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   f1743fbf06896       kube-controller-manager-no-preload-016092    kube-system
	023f43058735f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   5e38d99bd7225       kube-apiserver-no-preload-016092             kube-system
	3e8098e047ed3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   38439de2eda64       etcd-no-preload-016092                       kube-system
	
	
	==> coredns [99317f7c2bffae4d40739f1b3aa6bab2ce12ad89e6c1c3c128a638478a0960af] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40578 - 37783 "HINFO IN 2295610001495175142.4577198704993816597. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.454777121s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-016092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-016092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=no-preload-016092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_12_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:11:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-016092
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:13:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:12:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-016092
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b1944563-5e07-4c47-8e9f-57e7b42f6bfa
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-g85s4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-016092                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-mjnmk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-016092              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-016092     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-h4nh4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-016092              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ft5jh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jnwc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 56s                  kube-proxy       
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 119s)  kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                 node-controller  Node no-preload-016092 event: Registered Node no-preload-016092 in Controller
	  Normal  NodeReady                96s                  kubelet          Node no-preload-016092 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node no-preload-016092 event: Registered Node no-preload-016092 in Controller
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [3e8098e047ed3043a00cc812d78042ae68cad7ea01ba443d06753c58aca09dec] <==
	{"level":"warn","ts":"2025-10-25T09:12:56.835212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.844705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.853174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.862422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.870387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.878411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.902307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.919846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.928307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.936566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.944020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.952045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.966683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.973993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.983209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:57.036620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47608","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:13:05.376722Z","caller":"traceutil/trace.go:172","msg":"trace[362027365] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"111.615635ms","start":"2025-10-25T09:13:05.265081Z","end":"2025-10-25T09:13:05.376697Z","steps":["trace[362027365] 'process raft request'  (duration: 111.465014ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:13:05.380690Z","caller":"traceutil/trace.go:172","msg":"trace[2064125728] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"113.983239ms","start":"2025-10-25T09:13:05.266687Z","end":"2025-10-25T09:13:05.380670Z","steps":["trace[2064125728] 'process raft request'  (duration: 113.845494ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:13:05.662592Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.004676ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789566310212764 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kubernetes-dashboard/dashboard-metrics-scraper-qfpkf\" mod_revision:597 > success:<request_put:<key:\"/registry/endpointslices/kubernetes-dashboard/dashboard-metrics-scraper-qfpkf\" value_size:1159 >> failure:<request_range:<key:\"/registry/endpointslices/kubernetes-dashboard/dashboard-metrics-scraper-qfpkf\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T09:13:05.662831Z","caller":"traceutil/trace.go:172","msg":"trace[1653880554] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"278.327307ms","start":"2025-10-25T09:13:05.384479Z","end":"2025-10-25T09:13:05.662807Z","steps":["trace[1653880554] 'process raft request'  (duration: 45.630417ms)","trace[1653880554] 'compare'  (duration: 231.908941ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:13:05.662838Z","caller":"traceutil/trace.go:172","msg":"trace[1122351575] linearizableReadLoop","detail":"{readStateIndex:636; appliedIndex:634; }","duration":"115.138222ms","start":"2025-10-25T09:13:05.547691Z","end":"2025-10-25T09:13:05.662829Z","steps":["trace[1122351575] 'read index received'  (duration: 33.805µs)","trace[1122351575] 'applied index is now lower than readState.Index'  (duration: 115.103881ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:13:05.662931Z","caller":"traceutil/trace.go:172","msg":"trace[1093739187] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"278.439195ms","start":"2025-10-25T09:13:05.384484Z","end":"2025-10-25T09:13:05.662923Z","steps":["trace[1093739187] 'process raft request'  (duration: 278.257127ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:13:05.663133Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.438065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh.1871b109b15f1598\" limit:1 ","response":"range_response_count:1 size:874"}
	{"level":"info","ts":"2025-10-25T09:13:05.663171Z","caller":"traceutil/trace.go:172","msg":"trace[1711968475] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh.1871b109b15f1598; range_end:; response_count:1; response_revision:607; }","duration":"115.487113ms","start":"2025-10-25T09:13:05.547675Z","end":"2025-10-25T09:13:05.663162Z","steps":["trace[1711968475] 'agreement among raft nodes before linearized reading'  (duration: 115.343872ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:13:32.744609Z","caller":"traceutil/trace.go:172","msg":"trace[134763161] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"141.125246ms","start":"2025-10-25T09:13:32.603462Z","end":"2025-10-25T09:13:32.744587Z","steps":["trace[134763161] 'process raft request'  (duration: 79.677967ms)","trace[134763161] 'compare'  (duration: 61.313721ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:13:55 up 56 min,  0 user,  load average: 2.51, 3.13, 2.14
	Linux no-preload-016092 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9555087b4a95dd49c3a02af93de2be326ddca27814e2068040e5e19d323de57c] <==
	I1025 09:12:58.806033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:12:58.806295       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:12:58.806507       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:12:58.806526       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:12:58.806557       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:12:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:12:59.090822       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:12:59.090849       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:12:59.090859       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:12:59.091048       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:12:59.491740       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:12:59.491778       1 metrics.go:72] Registering metrics
	I1025 09:12:59.491906       1 controller.go:711] "Syncing nftables rules"
	I1025 09:13:09.090096       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:09.090154       1 main.go:301] handling current node
	I1025 09:13:19.090927       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:19.090997       1 main.go:301] handling current node
	I1025 09:13:29.090016       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:29.090042       1 main.go:301] handling current node
	I1025 09:13:39.090417       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:39.090482       1 main.go:301] handling current node
	I1025 09:13:49.090493       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:49.090533       1 main.go:301] handling current node
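	(note: the "nri plugin exited" line means kindnet could not reach the NRI socket; CRI-O only creates /var/run/nri/nri.sock when NRI is enabled, and kindnet carries on without it, as the subsequent sync lines show. A quick check — the socket path is taken from the log, the config location is an assumption about CRI-O defaults:
	  out/minikube-linux-amd64 -p no-preload-016092 ssh -- ls -l /var/run/nri/nri.sock
	  out/minikube-linux-amd64 -p no-preload-016092 ssh -- sudo grep -rn nri /etc/crio/)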
	
	
	==> kube-apiserver [023f43058735fc1aa667aba8a40553db5ed69c2c3aa83f526a3647121923840a] <==
	I1025 09:12:57.553839       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:12:57.553853       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:12:57.553859       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:12:57.553865       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:12:57.554069       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:12:57.554081       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:12:57.556758       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:12:57.556825       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:12:57.559679       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:12:57.561280       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:12:57.569138       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:12:57.599423       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:12:57.600780       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:12:57.819561       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:12:57.854163       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:12:57.874845       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:12:57.882286       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:12:57.888464       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:12:57.925024       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.204.166"}
	I1025 09:12:57.935560       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.183.242"}
	I1025 09:12:58.456884       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:13:00.728239       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:13:00.925456       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:13:00.974136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:13:00.974137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6ac72fdf21daf14e251d8647264ae6703ade9663ba42a5c79cbd7ff91e1f523d] <==
	I1025 09:13:00.372196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:13:00.372234       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:13:00.372236       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:13:00.372273       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:13:00.372298       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:13:00.372333       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:13:00.372340       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:13:00.372363       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:13:00.372364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:13:00.372545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:13:00.373693       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:13:00.373728       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:13:00.373810       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:13:00.373912       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-016092"
	I1025 09:13:00.373975       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:13:00.376098       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:13:00.376116       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:13:00.376757       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:13:00.376775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:13:00.376784       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:13:00.378480       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:13:00.380435       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:13:00.390701       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:13:00.396975       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:13:00.400268       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [51bc04f01d285b33d2ffd2d4857d9986a3d390c118d677a906b8b1b3854fcffe] <==
	I1025 09:12:58.659060       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:12:58.751423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:12:58.852265       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:12:58.852309       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:12:58.852482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:12:58.875662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:12:58.875721       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:12:58.880877       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:12:58.881235       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:12:58.881322       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:12:58.882946       1 config.go:200] "Starting service config controller"
	I1025 09:12:58.882970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:12:58.882978       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:12:58.882995       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:12:58.882997       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:12:58.883022       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:12:58.883080       1 config.go:309] "Starting node config controller"
	I1025 09:12:58.883102       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:12:58.883111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:12:58.983550       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:12:58.983548       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:12:58.983560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
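	(note: the "Kube-proxy configuration may be incomplete" error is advisory; the remedy it quotes can be applied verbatim — recent kube-proxy releases accept the special value "primary":
	  kube-proxy --nodeport-addresses=primary
	how to thread that flag through minikube's --extra-config is left out here, since the exact mapping is an assumption.)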
	
	
	==> kube-scheduler [33011a5a64acfce349c374b43be041eef3d52dab4c91a5a31072f67152719323] <==
	I1025 09:12:56.216532       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:12:57.473748       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:12:57.473786       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:12:57.473798       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:12:57.473808       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:12:57.527163       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:12:57.527279       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:12:57.531311       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:12:57.531356       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:12:57.533608       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:12:57.533694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:12:57.632054       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
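	(note: the requestheader warning above ships its own fix template. Filled in for this cluster it would look like the sketch below; the binding name is arbitrary, and because kube-scheduler authenticates as the user system:kube-scheduler rather than a ServiceAccount, --user stands in for the template's --serviceaccount:
	  kubectl --context no-preload-016092 -n kube-system create rolebinding \
	    scheduler-extension-apiserver-authentication-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler
	in practice the scheduler retried and proceeded, as the "Caches are synced" line shows.)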
	
	
	==> kubelet <==
	Oct 25 09:13:00 no-preload-016092 kubelet[717]: I1025 09:13:00.922623     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkpp9\" (UniqueName: \"kubernetes.io/projected/2d30e5f2-2721-44b1-bd1f-e3da225a334d-kube-api-access-gkpp9\") pod \"kubernetes-dashboard-855c9754f9-jnwc4\" (UID: \"2d30e5f2-2721-44b1-bd1f-e3da225a334d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4"
	Oct 25 09:13:00 no-preload-016092 kubelet[717]: I1025 09:13:00.922662     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2d30e5f2-2721-44b1-bd1f-e3da225a334d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jnwc4\" (UID: \"2d30e5f2-2721-44b1-bd1f-e3da225a334d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4"
	Oct 25 09:13:05 no-preload-016092 kubelet[717]: I1025 09:13:05.260139     717 scope.go:117] "RemoveContainer" containerID="85b26a0323362c5da7d73760586bf2307648046f556e05c42d8ca30f6299375e"
	Oct 25 09:13:06 no-preload-016092 kubelet[717]: I1025 09:13:06.265677     717 scope.go:117] "RemoveContainer" containerID="85b26a0323362c5da7d73760586bf2307648046f556e05c42d8ca30f6299375e"
	Oct 25 09:13:06 no-preload-016092 kubelet[717]: I1025 09:13:06.265806     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:06 no-preload-016092 kubelet[717]: E1025 09:13:06.266098     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:07 no-preload-016092 kubelet[717]: I1025 09:13:07.270070     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:07 no-preload-016092 kubelet[717]: E1025 09:13:07.270261     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:08 no-preload-016092 kubelet[717]: I1025 09:13:08.144953     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:13:11 no-preload-016092 kubelet[717]: I1025 09:13:11.807557     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4" podStartSLOduration=2.919893607 podStartE2EDuration="11.807537083s" podCreationTimestamp="2025-10-25 09:13:00 +0000 UTC" firstStartedPulling="2025-10-25 09:13:01.184168955 +0000 UTC m=+6.079970731" lastFinishedPulling="2025-10-25 09:13:10.071812429 +0000 UTC m=+14.967614207" observedRunningTime="2025-10-25 09:13:10.33504336 +0000 UTC m=+15.230845156" watchObservedRunningTime="2025-10-25 09:13:11.807537083 +0000 UTC m=+16.703338880"
	Oct 25 09:13:12 no-preload-016092 kubelet[717]: I1025 09:13:12.597099     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:12 no-preload-016092 kubelet[717]: E1025 09:13:12.597297     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: I1025 09:13:26.202923     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: I1025 09:13:26.332150     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: I1025 09:13:26.332628     717 scope.go:117] "RemoveContainer" containerID="48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: E1025 09:13:26.333083     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:29 no-preload-016092 kubelet[717]: I1025 09:13:29.344939     717 scope.go:117] "RemoveContainer" containerID="ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0"
	Oct 25 09:13:32 no-preload-016092 kubelet[717]: I1025 09:13:32.597296     717 scope.go:117] "RemoveContainer" containerID="48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	Oct 25 09:13:32 no-preload-016092 kubelet[717]: E1025 09:13:32.597565     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:45 no-preload-016092 kubelet[717]: I1025 09:13:45.204002     717 scope.go:117] "RemoveContainer" containerID="48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	Oct 25 09:13:45 no-preload-016092 kubelet[717]: E1025 09:13:45.204206     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:52 no-preload-016092 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:13:52 no-preload-016092 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:13:52 no-preload-016092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:13:52 no-preload-016092 systemd[1]: kubelet.service: Consumed 1.808s CPU time.
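	(note: the recurring failure here is dashboard-metrics-scraper in CrashLoopBackOff, with the kubelet's standard restart back-off doubling from 10s to 20s. The usual next step, pod name taken from the log:
	  kubectl --context no-preload-016092 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-ft5jh --previous
	  kubectl --context no-preload-016092 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-ft5jh)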
	
	
	==> kubernetes-dashboard [9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc] <==
	2025/10/25 09:13:10 Using namespace: kubernetes-dashboard
	2025/10/25 09:13:10 Using in-cluster config to connect to apiserver
	2025/10/25 09:13:10 Using secret token for csrf signing
	2025/10/25 09:13:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:13:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:13:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:13:10 Generating JWE encryption key
	2025/10/25 09:13:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:13:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:13:10 Initializing JWE encryption key from synchronized object
	2025/10/25 09:13:10 Creating in-cluster Sidecar client
	2025/10/25 09:13:10 Serving insecurely on HTTP port: 9090
	2025/10/25 09:13:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:13:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:13:10 Starting overwatch
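	(note: the "Metric client health check failed" retries line up with the scraper's crash loop in the kubelet log above; a direct look at the service being probed:
	  kubectl --context no-preload-016092 -n kubernetes-dashboard get svc,pods)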
	
	
	==> storage-provisioner [9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf] <==
	I1025 09:13:29.398103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:13:29.407415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:13:29.407467       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:13:29.409919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:32.932295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:37.196392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:40.794517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:43.848480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:46.870790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:46.876941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:13:46.877089       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:13:46.877258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-016092_2d5f2cd9-5616-46f3-822c-58c6b4f99eca!
	I1025 09:13:46.877257       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed8381ef-ef55-4ab4-b1c1-024372829c5a", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-016092_2d5f2cd9-5616-46f3-822c-58c6b4f99eca became leader
	W1025 09:13:46.880297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:46.884354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:13:46.977930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-016092_2d5f2cd9-5616-46f3-822c-58c6b4f99eca!
	W1025 09:13:48.887573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:48.893544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:50.897252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:50.901974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:52.904790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:52.908731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:54.912118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:54.919225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
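	(note: the repeated Endpoints deprecation warnings come from the provisioner's leader election, which still uses a v1 Endpoints lock — the k8s.io-minikube-hostpath object it acquires above. Harmless for now; it can be inspected with:
	  kubectl --context no-preload-016092 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml)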
	
	
	==> storage-provisioner [ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0] <==
	I1025 09:12:58.619939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:13:28.624016       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
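	(note: this earlier provisioner instance died because the kubernetes Service VIP 10.96.0.1:443 stayed unreachable for its ~30s startup window during the restart; the replacement instance at 09:13:29 succeeded. The same probe from inside the node, assuming curl exists in the kicbase image:
	  out/minikube-linux-amd64 -p no-preload-016092 ssh -- curl -sk --max-time 5 https://10.96.0.1/version)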
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016092 -n no-preload-016092
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016092 -n no-preload-016092: exit status 2 (334.436487ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-016092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-016092
helpers_test.go:243: (dbg) docker inspect no-preload-016092:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3",
	        "Created": "2025-10-25T09:11:34.405672193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243062,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:12:48.852001019Z",
	            "FinishedAt": "2025-10-25T09:12:47.964995377Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/hosts",
	        "LogPath": "/var/lib/docker/containers/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3/242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3-json.log",
	        "Name": "/no-preload-016092",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-016092:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-016092",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "242e1782ecdcb8ad1e7e1eb0fe05e4e2e62e6a75be376cca6091d9ffe3ea45d3",
	                "LowerDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae8a065c8382a2942b41fe2321abedfeae9142945385576a89944fd0b26559ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-016092",
	                "Source": "/var/lib/docker/volumes/no-preload-016092/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-016092",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-016092",
	                "name.minikube.sigs.k8s.io": "no-preload-016092",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aab68c228a8f51e7c21b2d0c0d329bf63c474dcf12d3b92ff76d77930b99807c",
	            "SandboxKey": "/var/run/docker/netns/aab68c228a8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-016092": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:f0:5f:c1:31:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ad973ee26d09cd8afb8873a923280f5e7c7740cd39b31b1cbf19d4d13b83d6e9",
	                    "EndpointID": "95948dffc13a50583b5652c4646a84c47eaf30e7f2a8232cce66de9733098045",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-016092",
	                        "242e1782ecdc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
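(note: the inspect output above is the quickest place to recover the host-mapped ports — e.g. 8443/tcp, the API server, is published on 127.0.0.1:33072. The same lookup without reading the whole blob:
  docker port no-preload-016092 8443
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-016092)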
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092: exit status 2 (325.82351ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-016092 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-016092 logs -n 25: (1.113564056s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ start   │ -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-047620       │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-959110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │                     │
	│ stop    │ -p old-k8s-version-959110 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p missing-upgrade-047620                                                                                                                                                                                                                     │ missing-upgrade-047620       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ stop    │ -p no-preload-016092 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:13 UTC │
	│ image   │ old-k8s-version-959110 image list --format=json                                                                                                                                                                                               │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ pause   │ -p old-k8s-version-959110 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p cert-expiration-851718                                                                                                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p disable-driver-mounts-664368                                                                                                                                                                                                               │ disable-driver-mounts-664368 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	│ image   │ no-preload-016092 image list --format=json                                                                                                                                                                                                    │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ pause   │ -p no-preload-016092 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:13:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:13:28.612634  253344 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:13:28.612923  253344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:28.612933  253344 out.go:374] Setting ErrFile to fd 2...
	I1025 09:13:28.612938  253344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:28.613208  253344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:13:28.613765  253344 out.go:368] Setting JSON to false
	I1025 09:13:28.615028  253344 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3357,"bootTime":1761380252,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:13:28.615178  253344 start.go:141] virtualization: kvm guest
	I1025 09:13:28.616968  253344 out.go:179] * [default-k8s-diff-port-891466] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:13:28.618661  253344 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:13:28.618627  253344 notify.go:220] Checking for updates...
	I1025 09:13:28.621242  253344 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:13:28.622560  253344 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:13:28.624000  253344 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:13:28.625467  253344 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:13:28.627009  253344 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:13:28.629156  253344 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:28.629302  253344 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:28.629437  253344 config.go:182] Loaded profile config "no-preload-016092": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:28.629552  253344 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:13:28.653857  253344 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:13:28.653975  253344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:13:28.712581  253344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:13:28.701437352 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:13:28.712716  253344 docker.go:318] overlay module found
	I1025 09:13:28.714509  253344 out.go:179] * Using the docker driver based on user configuration
	I1025 09:13:28.715778  253344 start.go:305] selected driver: docker
	I1025 09:13:28.715798  253344 start.go:925] validating driver "docker" against <nil>
	I1025 09:13:28.715809  253344 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:13:28.716349  253344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:13:28.775607  253344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:13:28.764937778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:13:28.775823  253344 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:13:28.776015  253344 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:13:28.777836  253344 out.go:179] * Using Docker driver with root privileges
	I1025 09:13:28.779224  253344 cni.go:84] Creating CNI manager for ""
	I1025 09:13:28.779295  253344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:13:28.779307  253344 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
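With the docker driver and the crio runtime, minikube picks kindnet as the CNI and records NetworkPlugin=cni in the cluster config that follows. A quick post-start check, as a hedged sketch (the kube-system daemonset name "kindnet" is minikube's usual convention and is assumed here):

	kubectl --context default-k8s-diff-port-891466 -n kube-system get daemonset kindnet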
	I1025 09:13:28.779376  253344 start.go:349] cluster config:
	{Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
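The "diff-port" in the profile name corresponds to APIServerPort:8444 in the config above (and Port:8444 on the node) instead of minikube's default 8443. A start invocation that produces such a config would look roughly like this sketch; the exact flag set used by the test harness is not shown in this excerpt:

	out/minikube-linux-amd64 start -p default-k8s-diff-port-891466 \
	  --driver=docker --container-runtime=crio --apiserver-port=8444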
	I1025 09:13:28.780778  253344 out.go:179] * Starting "default-k8s-diff-port-891466" primary control-plane node in "default-k8s-diff-port-891466" cluster
	I1025 09:13:28.781933  253344 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:13:28.783248  253344 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:13:28.784599  253344 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:13:28.784671  253344 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:13:28.784694  253344 cache.go:58] Caching tarball of preloaded images
	I1025 09:13:28.784700  253344 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:13:28.784795  253344 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:13:28.784812  253344 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:13:28.784903  253344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/config.json ...
	I1025 09:13:28.784925  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/config.json: {Name:mk3880c3b0ab49643a06cf82efa08e2ab5917cfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:28.808126  253344 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:13:28.808147  253344 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:13:28.808162  253344 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:13:28.808190  253344 start.go:360] acquireMachinesLock for default-k8s-diff-port-891466: {Name:mke06babecb9ce5542f3c73a3ce93e6aca9a1c40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:13:28.808282  253344 start.go:364] duration metric: took 76.578µs to acquireMachinesLock for "default-k8s-diff-port-891466"
	I1025 09:13:28.808304  253344 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:13:28.808374  253344 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:13:27.537720  247074 addons.go:514] duration metric: took 570.774378ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:13:27.772365  247074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-106968" context rescaled to 1 replicas
	W1025 09:13:29.271032  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:27.859984  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:27.860019  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:30.377709  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:30.378162  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:30.378233  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:30.378304  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:30.415153  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:30.415180  225660 cri.go:89] found id: ""
	I1025 09:13:30.415191  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:30.415253  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:30.419467  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:30.419539  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:30.449268  225660 cri.go:89] found id: ""
	I1025 09:13:30.449292  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.449303  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:30.449310  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:30.449369  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:30.478385  225660 cri.go:89] found id: ""
	I1025 09:13:30.478408  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.478416  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:30.478422  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:30.478477  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:30.511723  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:30.511744  225660 cri.go:89] found id: ""
	I1025 09:13:30.511751  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:30.511799  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:30.516073  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:30.516146  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:30.546036  225660 cri.go:89] found id: ""
	I1025 09:13:30.546059  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.546069  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:30.546076  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:30.546135  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:30.575208  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:30.575236  225660 cri.go:89] found id: ""
	I1025 09:13:30.575245  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:30.575307  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:30.579464  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:30.579540  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:30.611243  225660 cri.go:89] found id: ""
	I1025 09:13:30.611274  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.611285  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:30.611294  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:30.611360  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:30.639765  225660 cri.go:89] found id: ""
	I1025 09:13:30.639795  225660 logs.go:282] 0 containers: []
	W1025 09:13:30.639806  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:30.639817  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:30.639829  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:30.669086  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:30.669125  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:30.724354  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:30.724388  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:30.757723  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:30.757760  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:30.850302  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:30.850360  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:30.865928  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:30.865954  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:30.935487  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:13:30.935505  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:30.935518  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:30.974924  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:30.974970  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	W1025 09:13:30.595447  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
	W1025 09:13:33.094325  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
	I1025 09:13:28.811138  253344 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:13:28.811360  253344 start.go:159] libmachine.API.Create for "default-k8s-diff-port-891466" (driver="docker")
	I1025 09:13:28.811389  253344 client.go:168] LocalClient.Create starting
	I1025 09:13:28.811450  253344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:13:28.811486  253344 main.go:141] libmachine: Decoding PEM data...
	I1025 09:13:28.811504  253344 main.go:141] libmachine: Parsing certificate...
	I1025 09:13:28.811567  253344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:13:28.811594  253344 main.go:141] libmachine: Decoding PEM data...
	I1025 09:13:28.811604  253344 main.go:141] libmachine: Parsing certificate...
	I1025 09:13:28.811971  253344 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-891466 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:13:28.829900  253344 cli_runner.go:211] docker network inspect default-k8s-diff-port-891466 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:13:28.829975  253344 network_create.go:284] running [docker network inspect default-k8s-diff-port-891466] to gather additional debugging logs...
	I1025 09:13:28.829992  253344 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-891466
	W1025 09:13:28.846910  253344 cli_runner.go:211] docker network inspect default-k8s-diff-port-891466 returned with exit code 1
	I1025 09:13:28.846941  253344 network_create.go:287] error running [docker network inspect default-k8s-diff-port-891466]: docker network inspect default-k8s-diff-port-891466: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-891466 not found
	I1025 09:13:28.846957  253344 network_create.go:289] output of [docker network inspect default-k8s-diff-port-891466]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-891466 not found
	
	** /stderr **
	I1025 09:13:28.847060  253344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:13:28.864803  253344 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:13:28.865764  253344 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:13:28.866565  253344 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:13:28.867560  253344 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e53dd0}
	I1025 09:13:28.867588  253344 network_create.go:124] attempt to create docker network default-k8s-diff-port-891466 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 09:13:28.867662  253344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 default-k8s-diff-port-891466
	I1025 09:13:28.931110  253344 network_create.go:108] docker network default-k8s-diff-port-891466 192.168.76.0/24 created
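The scan above walks the private 192.168.x.0/24 candidates, skips the three subnets already held by other profile networks, and settles on 192.168.76.0/24. The result can be verified with a plain inspect, as a hedged example against the network just created:

	docker network inspect default-k8s-diff-port-891466 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected: 192.168.76.0/24 192.168.76.1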
	I1025 09:13:28.931151  253344 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-891466" container
	I1025 09:13:28.931217  253344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:13:28.950678  253344 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-891466 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:13:28.970188  253344 oci.go:103] Successfully created a docker volume default-k8s-diff-port-891466
	I1025 09:13:28.970279  253344 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-891466-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --entrypoint /usr/bin/test -v default-k8s-diff-port-891466:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:13:29.375768  253344 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-891466
	I1025 09:13:29.375827  253344 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:13:29.375853  253344 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:13:29.375934  253344 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-891466:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
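The two docker run invocations above are minikube's preload pattern: a throwaway container first probes the fresh named volume (--entrypoint /usr/bin/test ... -d /var), then a second one bind-mounts the preloaded tarball read-only and untars it into the volume, so the node container later starts with images already on disk. Reduced to its essentials, the pattern is (KICBASE_IMAGE and the tarball path are placeholders):

	docker volume create demo-preload
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v demo-preload:/extractDir \
	  KICBASE_IMAGE -I lz4 -xf /preloaded.tar -C /extractDir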
	W1025 09:13:31.771416  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:33.771765  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:33.527412  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:33.528046  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:33.528104  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:33.528161  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:33.556662  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:33.556690  225660 cri.go:89] found id: ""
	I1025 09:13:33.556700  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:33.556769  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:33.560872  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:33.560968  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:33.590079  225660 cri.go:89] found id: ""
	I1025 09:13:33.590105  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.590114  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:33.590123  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:33.590178  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:33.618754  225660 cri.go:89] found id: ""
	I1025 09:13:33.618781  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.618790  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:33.618796  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:33.618848  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:33.646274  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:33.646298  225660 cri.go:89] found id: ""
	I1025 09:13:33.646315  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:33.646408  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:33.650436  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:33.650507  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:33.679391  225660 cri.go:89] found id: ""
	I1025 09:13:33.679420  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.679438  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:33.679446  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:33.679503  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:33.707730  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:33.707756  225660 cri.go:89] found id: ""
	I1025 09:13:33.707765  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:33.707822  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:33.711941  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:33.712016  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:33.740260  225660 cri.go:89] found id: ""
	I1025 09:13:33.740283  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.740291  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:33.740297  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:33.740353  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:33.767808  225660 cri.go:89] found id: ""
	I1025 09:13:33.767836  225660 logs.go:282] 0 containers: []
	W1025 09:13:33.767844  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:33.767852  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:33.767863  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:33.800102  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:33.800130  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:33.891542  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:33.891574  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:33.909599  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:33.909678  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:33.979672  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:13:33.979697  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:33.979715  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:34.014457  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:34.014490  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:34.069093  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:34.069131  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:34.104461  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:34.104494  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:36.664742  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:36.665199  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:36.665253  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:36.665303  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:36.694592  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:36.694618  225660 cri.go:89] found id: ""
	I1025 09:13:36.694627  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:36.694706  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:36.698617  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:36.698694  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:36.726736  225660 cri.go:89] found id: ""
	I1025 09:13:36.726768  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.726780  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:36.726787  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:36.726839  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:36.754524  225660 cri.go:89] found id: ""
	I1025 09:13:36.754572  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.754585  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:36.754594  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:36.754673  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:36.781493  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:36.781521  225660 cri.go:89] found id: ""
	I1025 09:13:36.781532  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:36.781596  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:36.785445  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:36.785506  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:36.811760  225660 cri.go:89] found id: ""
	I1025 09:13:36.811791  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.811803  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:36.811812  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:36.811874  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:36.840331  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:36.840352  225660 cri.go:89] found id: ""
	I1025 09:13:36.840360  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:36.840415  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:36.844625  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:36.844709  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:36.875916  225660 cri.go:89] found id: ""
	I1025 09:13:36.875947  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.875959  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:36.875968  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:36.876025  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:36.904849  225660 cri.go:89] found id: ""
	I1025 09:13:36.904878  225660 logs.go:282] 0 containers: []
	W1025 09:13:36.904890  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:36.904901  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:36.904919  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:36.937370  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:36.937402  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:37.029654  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:37.029690  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:37.045767  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:37.045804  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:37.108584  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:13:37.108601  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:37.108612  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:37.142737  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:37.142769  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:37.198801  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:37.198850  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:37.229771  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:37.229802  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:33.886994  253344 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-891466:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.511015973s)
	I1025 09:13:33.887025  253344 kic.go:203] duration metric: took 4.511169814s to extract preloaded images to volume ...
	W1025 09:13:33.887131  253344 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:13:33.887182  253344 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:13:33.887226  253344 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:13:33.949542  253344 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-891466 --name default-k8s-diff-port-891466 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-891466 --network default-k8s-diff-port-891466 --ip 192.168.76.2 --volume default-k8s-diff-port-891466:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:13:34.244093  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Running}}
	I1025 09:13:34.263073  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:34.282841  253344 cli_runner.go:164] Run: docker exec default-k8s-diff-port-891466 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:13:34.328688  253344 oci.go:144] the created container "default-k8s-diff-port-891466" has a running status.
	I1025 09:13:34.328727  253344 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa...
	I1025 09:13:34.798497  253344 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:13:34.825938  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:34.845580  253344 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:13:34.845603  253344 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-891466 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:13:34.896673  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:34.917302  253344 machine.go:93] provisionDockerMachine start ...
	I1025 09:13:34.917413  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:34.939156  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:34.939489  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:34.939508  253344 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:13:35.083872  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-891466
	
	I1025 09:13:35.083902  253344 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-891466"
	I1025 09:13:35.083961  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:35.103684  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:35.103888  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:35.103901  253344 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-891466 && echo "default-k8s-diff-port-891466" | sudo tee /etc/hostname
	I1025 09:13:35.255338  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-891466
	
	I1025 09:13:35.255472  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:35.274259  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:35.274488  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:35.274509  253344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-891466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-891466/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-891466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:13:35.418809  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:13:35.418835  253344 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:13:35.418871  253344 ubuntu.go:190] setting up certificates
	I1025 09:13:35.418888  253344 provision.go:84] configureAuth start
	I1025 09:13:35.418954  253344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-891466
	I1025 09:13:35.437913  253344 provision.go:143] copyHostCerts
	I1025 09:13:35.437967  253344 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:13:35.437977  253344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:13:35.438044  253344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:13:35.438138  253344 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:13:35.438146  253344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:13:35.438171  253344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:13:35.438225  253344 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:13:35.438232  253344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:13:35.438254  253344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:13:35.438312  253344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-891466 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-891466 localhost minikube]
	I1025 09:13:36.068834  253344 provision.go:177] copyRemoteCerts
	I1025 09:13:36.068898  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:13:36.068944  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.087633  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.189289  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:13:36.208791  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:13:36.226959  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:13:36.245095  253344 provision.go:87] duration metric: took 826.193227ms to configureAuth
	I1025 09:13:36.245125  253344 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:13:36.245283  253344 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:36.245386  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.265056  253344 main.go:141] libmachine: Using SSH client type: native
	I1025 09:13:36.265266  253344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1025 09:13:36.265283  253344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:13:36.520347  253344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:13:36.520374  253344 machine.go:96] duration metric: took 1.603039758s to provisionDockerMachine
	I1025 09:13:36.520386  253344 client.go:171] duration metric: took 7.708991923s to LocalClient.Create
	I1025 09:13:36.520409  253344 start.go:167] duration metric: took 7.709048128s to libmachine.API.Create "default-k8s-diff-port-891466"
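The printf | tee step above writes an environment file at /etc/sysconfig/crio.minikube and restarts CRI-O; presumably the crio unit in the kicbase image sources that file, so CRIO_MINIKUBE_OPTIONS (here --insecure-registry 10.96.0.0/12, i.e. the service CIDR) ends up on the crio command line. One way to confirm that wiring from inside the node, as a hedged sketch:

	systemctl cat crio | grep -n 'crio.minikube\|CRIO_MINIKUBE_OPTIONS'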
	I1025 09:13:36.520422  253344 start.go:293] postStartSetup for "default-k8s-diff-port-891466" (driver="docker")
	I1025 09:13:36.520435  253344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:13:36.520500  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:13:36.520546  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.539119  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.641739  253344 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:13:36.645540  253344 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:13:36.645577  253344 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:13:36.645591  253344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:13:36.645676  253344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:13:36.645752  253344 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:13:36.645842  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:13:36.654020  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:13:36.675745  253344 start.go:296] duration metric: took 155.310502ms for postStartSetup
	I1025 09:13:36.676071  253344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-891466
	I1025 09:13:36.696385  253344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/config.json ...
	I1025 09:13:36.696748  253344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:13:36.696803  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.715938  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.815045  253344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:13:36.819541  253344 start.go:128] duration metric: took 8.011154809s to createHost
	I1025 09:13:36.819571  253344 start.go:83] releasing machines lock for "default-k8s-diff-port-891466", held for 8.011275909s
	I1025 09:13:36.819658  253344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-891466
	I1025 09:13:36.840827  253344 ssh_runner.go:195] Run: cat /version.json
	I1025 09:13:36.840888  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.840897  253344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:13:36.840988  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:36.862345  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.862673  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:36.961996  253344 ssh_runner.go:195] Run: systemctl --version
	I1025 09:13:37.020812  253344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:13:37.058365  253344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:13:37.063428  253344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:13:37.063494  253344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:13:37.093001  253344 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
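The find invocation above is logged with its shell quoting stripped, so it cannot be pasted back into a shell as-is. A runnable form of the same step, quoting restored (a sketch; GNU find substitutes {} everywhere in the -exec arguments):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

This shelves any bridge/podman CNI configs, presumably so the kindnet CNI recommended later (cni.go:143 below) does not compete with a stale bridge network.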
	I1025 09:13:37.093027  253344 start.go:495] detecting cgroup driver to use...
	I1025 09:13:37.093059  253344 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:13:37.093108  253344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:13:37.111322  253344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:13:37.124229  253344 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:13:37.124301  253344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:13:37.142801  253344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:13:37.162191  253344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:13:37.255758  253344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:13:37.343008  253344 docker.go:234] disabling docker service ...
	I1025 09:13:37.343078  253344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:13:37.363249  253344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:13:37.376353  253344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:13:37.463927  253344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:13:37.548833  253344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:13:37.561400  253344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:13:37.575902  253344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:13:37.575952  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.586846  253344 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:13:37.586912  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.596834  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.606316  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.615263  253344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:13:37.623614  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.632705  253344 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.647101  253344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:13:37.656455  253344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:13:37.664395  253344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:13:37.672352  253344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:13:37.749344  253344 ssh_runner.go:195] Run: sudo systemctl restart crio
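The block from 09:13:37.561 to 09:13:37.749 converges CRI-O's drop-in config one sed at a time, then restarts the runtime so it takes effect. A condensed sketch of the end state those commands produce (values copied verbatim from the Run lines above; the drop-in path is minikube's convention, not a CRI-O default):

	# /etc/crictl.yaml — point crictl at the CRI-O socket
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf — fields the sed edits establish
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"    # matches the "systemd" cgroup driver detected on the host
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",   # lets pods bind ports below 1024
	]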
	I1025 09:13:37.854499  253344 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:13:37.854583  253344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:13:37.858589  253344 start.go:563] Will wait 60s for crictl version
	I1025 09:13:37.858669  253344 ssh_runner.go:195] Run: which crictl
	I1025 09:13:37.862272  253344 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:13:37.887590  253344 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:13:37.887685  253344 ssh_runner.go:195] Run: crio --version
	I1025 09:13:37.916279  253344 ssh_runner.go:195] Run: crio --version
	I1025 09:13:37.946711  253344 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 09:13:35.095298  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
	W1025 09:13:37.594143  242862 pod_ready.go:104] pod "coredns-66bc5c9577-g85s4" is not "Ready", error: <nil>
	I1025 09:13:38.594604  242862 pod_ready.go:94] pod "coredns-66bc5c9577-g85s4" is "Ready"
	I1025 09:13:38.594632  242862 pod_ready.go:86] duration metric: took 39.505656882s for pod "coredns-66bc5c9577-g85s4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.597158  242862 pod_ready.go:83] waiting for pod "etcd-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.601110  242862 pod_ready.go:94] pod "etcd-no-preload-016092" is "Ready"
	I1025 09:13:38.601133  242862 pod_ready.go:86] duration metric: took 3.949257ms for pod "etcd-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.603088  242862 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.607142  242862 pod_ready.go:94] pod "kube-apiserver-no-preload-016092" is "Ready"
	I1025 09:13:38.607163  242862 pod_ready.go:86] duration metric: took 4.053485ms for pod "kube-apiserver-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.608894  242862 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:37.947913  253344 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-891466 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:13:37.965567  253344 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:13:37.969682  253344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
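The hosts update above rebuilds the file into a temp path and copies it back rather than editing in place; inside a Docker container /etc/hosts is typically a bind mount, which can be overwritten but not replaced by rename (the failure mode of sed -i). The idiom, spelled out:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts     # drop any stale entry
	  echo "192.168.76.1	host.minikube.internal"        # append the fresh one
	} > /tmp/h.$$                                         # $$ = shell PID, per-process temp file
	sudo cp /tmp/h.$$ /etc/hosts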
	I1025 09:13:37.980473  253344 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:13:37.980579  253344 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:13:37.980632  253344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:13:38.015046  253344 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:13:38.015069  253344 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:13:38.015111  253344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:13:38.042183  253344 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:13:38.042203  253344 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:13:38.042210  253344 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1025 09:13:38.042314  253344 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-891466 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:13:38.042403  253344 ssh_runner.go:195] Run: crio config
	I1025 09:13:38.086906  253344 cni.go:84] Creating CNI manager for ""
	I1025 09:13:38.086929  253344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:13:38.086948  253344 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:13:38.086973  253344 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-891466 NodeName:default-k8s-diff-port-891466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:13:38.087101  253344 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-891466"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:13:38.087173  253344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:13:38.096392  253344 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:13:38.096465  253344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:13:38.104633  253344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:13:38.117560  253344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:13:38.132792  253344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
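The rendered config is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml just before init (see the cp at 09:13:40.023 below). If a run fails around here, the staged file can be sanity-checked by hand; kubeadm v1.26+ ships a validator (hypothetical invocation, not part of this test):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new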
	I1025 09:13:38.145537  253344 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:13:38.149682  253344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:13:38.161340  253344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:13:38.248287  253344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:13:38.271547  253344 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466 for IP: 192.168.76.2
	I1025 09:13:38.271570  253344 certs.go:195] generating shared ca certs ...
	I1025 09:13:38.271591  253344 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:38.271790  253344 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:13:38.271859  253344 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:13:38.271873  253344 certs.go:257] generating profile certs ...
	I1025 09:13:38.271947  253344 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.key
	I1025 09:13:38.271972  253344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.crt with IP's: []
	W1025 09:13:36.271695  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:38.271800  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:38.792761  242862 pod_ready.go:94] pod "kube-controller-manager-no-preload-016092" is "Ready"
	I1025 09:13:38.792792  242862 pod_ready.go:86] duration metric: took 183.877835ms for pod "kube-controller-manager-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:38.993231  242862 pod_ready.go:83] waiting for pod "kube-proxy-h4nh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.392443  242862 pod_ready.go:94] pod "kube-proxy-h4nh4" is "Ready"
	I1025 09:13:39.392476  242862 pod_ready.go:86] duration metric: took 399.213308ms for pod "kube-proxy-h4nh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.593703  242862 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.994043  242862 pod_ready.go:94] pod "kube-scheduler-no-preload-016092" is "Ready"
	I1025 09:13:39.994076  242862 pod_ready.go:86] duration metric: took 400.339826ms for pod "kube-scheduler-no-preload-016092" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:13:39.994090  242862 pod_ready.go:40] duration metric: took 40.908672919s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:13:40.050250  242862 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:13:40.052464  242862 out.go:179] * Done! kubectl is now configured to use "no-preload-016092" cluster and "default" namespace by default
	I1025 09:13:38.638750  253344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.crt ...
	I1025 09:13:38.638787  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.crt: {Name:mk046a5c8eed99508a2b61f0b40d08593dd03598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:38.639007  253344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.key ...
	I1025 09:13:38.639031  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/client.key: {Name:mk09cfc5fc2cdd3078df5893e21ea0d1e1d8cd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:38.639168  253344 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9
	I1025 09:13:38.639186  253344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 09:13:39.031158  253344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9 ...
	I1025 09:13:39.031199  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9: {Name:mk89adc1c30ad279a00647ca0b020e75d01e0a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:39.031425  253344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9 ...
	I1025 09:13:39.031447  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9: {Name:mk80b618653cd83436b9beec85a76a55cd9f1741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:39.031557  253344 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt.a83659c9 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt
	I1025 09:13:39.031682  253344 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key.a83659c9 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key
	I1025 09:13:39.031779  253344 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key
	I1025 09:13:39.031804  253344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt with IP's: []
	I1025 09:13:39.540694  253344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt ...
	I1025 09:13:39.540724  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt: {Name:mkd76c037d2fac6ecb6bd6f8576f2d93fe21e890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:39.540917  253344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key ...
	I1025 09:13:39.540933  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key: {Name:mk0ff69cbc2b3e5dee62041d137d334e168780d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
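minikube mints these profile certs with Go's crypto libraries. Purely as an illustration of the equivalent operation (placeholder names and subject, not minikube's exact values), the openssl version of issuing a CA-signed client cert is:

	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj '/CN=example-user/O=example-group' -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt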
	I1025 09:13:39.541208  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:13:39.541258  253344 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:13:39.541274  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:13:39.541317  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:13:39.541364  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:13:39.541401  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:13:39.541454  253344 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:13:39.542238  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:13:39.561823  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:13:39.580182  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:13:39.599721  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:13:39.618068  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:13:39.637573  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:13:39.655327  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:13:39.672859  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/default-k8s-diff-port-891466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:13:39.690359  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:13:39.710387  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:13:39.729222  253344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:13:39.747742  253344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:13:39.760917  253344 ssh_runner.go:195] Run: openssl version
	I1025 09:13:39.767175  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:13:39.777045  253344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:13:39.781316  253344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:13:39.781383  253344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:13:39.827835  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:13:39.838008  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:13:39.847162  253344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:13:39.851876  253344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:13:39.851949  253344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:13:39.899767  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:13:39.909467  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:13:39.920221  253344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:13:39.925285  253344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:13:39.925353  253344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:13:39.964239  253344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
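The otherwise opaque link names above (3ec20f2e.0, b5213941.0, 51391683.0) come from OpenSSL's subject-hash directory layout: each CA in /etc/ssl/certs is found via a <hash>.0 symlink. Reproducing one by hand with the same commands the log runs:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"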
	I1025 09:13:39.974234  253344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:13:39.979127  253344 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:13:39.979196  253344 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-891466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-891466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:13:39.979288  253344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:13:39.979359  253344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:13:40.011969  253344 cri.go:89] found id: ""
	I1025 09:13:40.012041  253344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:13:40.023011  253344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:13:40.033291  253344 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:13:40.033349  253344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:13:40.043480  253344 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:13:40.043505  253344 kubeadm.go:157] found existing configuration files:
	
	I1025 09:13:40.043558  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1025 09:13:40.053802  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:13:40.053921  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:13:40.062825  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1025 09:13:40.074619  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:13:40.074705  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:13:40.084334  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1025 09:13:40.092394  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:13:40.092472  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:13:40.100758  253344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1025 09:13:40.109509  253344 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:13:40.109580  253344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:13:40.117773  253344 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:13:40.166913  253344 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:13:40.166975  253344 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:13:40.194418  253344 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:13:40.194513  253344 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:13:40.194566  253344 kubeadm.go:318] OS: Linux
	I1025 09:13:40.194773  253344 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:13:40.194852  253344 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:13:40.194929  253344 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:13:40.195003  253344 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:13:40.195077  253344 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:13:40.195159  253344 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:13:40.195236  253344 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:13:40.195293  253344 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:13:40.266510  253344 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:13:40.266666  253344 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:13:40.266821  253344 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:13:40.276698  253344 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:13:39.796698  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:39.797133  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:13:39.797189  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:39.797247  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:39.829473  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:39.829496  225660 cri.go:89] found id: ""
	I1025 09:13:39.829505  225660 logs.go:282] 1 containers: [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:39.829571  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:39.833965  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:39.834058  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:39.863164  225660 cri.go:89] found id: ""
	I1025 09:13:39.863191  225660 logs.go:282] 0 containers: []
	W1025 09:13:39.863202  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:39.863209  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:39.863266  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:39.896538  225660 cri.go:89] found id: ""
	I1025 09:13:39.896564  225660 logs.go:282] 0 containers: []
	W1025 09:13:39.896574  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:39.896582  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:39.896669  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:39.927160  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:39.927177  225660 cri.go:89] found id: ""
	I1025 09:13:39.927184  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:39.927228  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:39.931003  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:39.931094  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:39.959354  225660 cri.go:89] found id: ""
	I1025 09:13:39.959381  225660 logs.go:282] 0 containers: []
	W1025 09:13:39.959407  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:39.959415  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:39.959469  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:39.989132  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:39.989164  225660 cri.go:89] found id: ""
	I1025 09:13:39.989174  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:39.989229  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:39.993529  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:39.993606  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:40.027970  225660 cri.go:89] found id: ""
	I1025 09:13:40.028003  225660 logs.go:282] 0 containers: []
	W1025 09:13:40.028015  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:40.028023  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:40.028084  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:40.059261  225660 cri.go:89] found id: ""
	I1025 09:13:40.059288  225660 logs.go:282] 0 containers: []
	W1025 09:13:40.059299  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:40.059310  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:40.059328  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:13:40.078575  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:40.078613  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:13:40.151226  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:13:40.151248  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:40.151262  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:40.192832  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:40.192860  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:40.251960  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:40.251992  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:40.283271  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:40.283302  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:40.337699  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:40.337732  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:40.377929  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:40.377959  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:40.280341  253344 out.go:252]   - Generating certificates and keys ...
	I1025 09:13:40.280472  253344 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:13:40.280609  253344 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:13:40.706013  253344 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:13:41.001628  253344 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:13:41.249981  253344 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:13:41.347456  253344 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:13:41.448542  253344 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:13:41.448821  253344 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-891466 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:13:42.018111  253344 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:13:42.018328  253344 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-891466 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:13:42.202146  253344 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:13:42.399186  253344 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:13:42.676858  253344 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:13:42.676927  253344 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:13:43.037169  253344 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:13:43.406779  253344 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:13:44.008980  253344 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:13:44.046618  253344 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:13:44.671406  253344 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:13:44.672034  253344 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:13:44.676269  253344 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 09:13:40.770796  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:42.771402  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:42.983033  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:13:44.678057  253344 out.go:252]   - Booting up control plane ...
	I1025 09:13:44.678182  253344 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:13:44.678306  253344 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:13:44.679171  253344 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:13:44.693039  253344 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:13:44.693168  253344 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:13:44.700001  253344 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:13:44.701315  253344 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:13:44.701364  253344 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:13:44.802920  253344 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:13:44.803062  253344 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:13:45.303734  253344 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.907174ms
	I1025 09:13:45.308139  253344 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:13:45.308262  253344 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1025 09:13:45.308377  253344 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:13:45.308491  253344 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:13:46.856938  253344 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.548713234s
	I1025 09:13:47.362030  253344 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.053769612s
	I1025 09:13:49.310555  253344 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00236809s
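The three control-plane-check probes are plain HTTPS endpoints and can be re-run manually if a start stalls at this phase (URLs copied from the log; -k skips certificate verification, which these self-signed serving certs require):

	curl -k https://192.168.76.2:8444/livez     # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez       # kube-scheduler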
	I1025 09:13:49.323457  253344 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:13:49.336425  253344 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:13:49.347724  253344 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:13:49.348057  253344 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-891466 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:13:49.357897  253344 kubeadm.go:318] [bootstrap-token] Using token: a2dy9q.ohzp7ddafsou5lmk
	W1025 09:13:45.271388  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:47.274137  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:49.771519  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:49.359796  253344 out.go:252]   - Configuring RBAC rules ...
	I1025 09:13:49.359968  253344 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:13:49.362514  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:13:49.368552  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:13:49.372201  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:13:49.374852  253344 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:13:49.377367  253344 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:13:49.717433  253344 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:13:50.134284  253344 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:13:50.716951  253344 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:13:50.717840  253344 kubeadm.go:318] 
	I1025 09:13:50.717903  253344 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:13:50.717911  253344 kubeadm.go:318] 
	I1025 09:13:50.717981  253344 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:13:50.717988  253344 kubeadm.go:318] 
	I1025 09:13:50.718028  253344 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:13:50.718082  253344 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:13:50.718166  253344 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:13:50.718187  253344 kubeadm.go:318] 
	I1025 09:13:50.718248  253344 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:13:50.718257  253344 kubeadm.go:318] 
	I1025 09:13:50.718311  253344 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:13:50.718319  253344 kubeadm.go:318] 
	I1025 09:13:50.718386  253344 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:13:50.718499  253344 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:13:50.718595  253344 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:13:50.718611  253344 kubeadm.go:318] 
	I1025 09:13:50.718749  253344 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:13:50.718819  253344 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:13:50.718825  253344 kubeadm.go:318] 
	I1025 09:13:50.718901  253344 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token a2dy9q.ohzp7ddafsou5lmk \
	I1025 09:13:50.718995  253344 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:13:50.719016  253344 kubeadm.go:318] 	--control-plane 
	I1025 09:13:50.719022  253344 kubeadm.go:318] 
	I1025 09:13:50.719111  253344 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:13:50.719123  253344 kubeadm.go:318] 
	I1025 09:13:50.719240  253344 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token a2dy9q.ohzp7ddafsou5lmk \
	I1025 09:13:50.719362  253344 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:13:50.722786  253344 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:13:50.722911  253344 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:13:50.722940  253344 cni.go:84] Creating CNI manager for ""
	I1025 09:13:50.722953  253344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:13:50.724917  253344 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:13:47.984259  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 09:13:47.984327  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:13:47.984407  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:13:48.011779  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:13:48.011800  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:48.011806  225660 cri.go:89] found id: ""
	I1025 09:13:48.011815  225660 logs.go:282] 2 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:13:48.011882  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.016325  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.020013  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:13:48.020089  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:13:48.047225  225660 cri.go:89] found id: ""
	I1025 09:13:48.047255  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.047265  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:13:48.047284  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:13:48.047343  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:13:48.074288  225660 cri.go:89] found id: ""
	I1025 09:13:48.074315  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.074323  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:13:48.074330  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:13:48.074385  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:13:48.102465  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:48.102493  225660 cri.go:89] found id: ""
	I1025 09:13:48.102502  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:13:48.102562  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.106812  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:13:48.106865  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:13:48.133980  225660 cri.go:89] found id: ""
	I1025 09:13:48.134006  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.134016  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:13:48.134023  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:13:48.134126  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:13:48.161510  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:48.161530  225660 cri.go:89] found id: ""
	I1025 09:13:48.161538  225660 logs.go:282] 1 containers: [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:13:48.161594  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:13:48.165480  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:13:48.165544  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:13:48.193550  225660 cri.go:89] found id: ""
	I1025 09:13:48.193593  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.193603  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:13:48.193609  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:13:48.193676  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:13:48.220314  225660 cri.go:89] found id: ""
	I1025 09:13:48.220353  225660 logs.go:282] 0 containers: []
	W1025 09:13:48.220366  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:13:48.220386  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:13:48.220405  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 09:13:50.726217  253344 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:13:50.730935  253344 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:13:50.730957  253344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:13:50.744241  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
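After the manifest is applied, kindnet (recommended above for the docker driver + crio runtime) drops its CNI config onto the node. A hedged check, with the conflist filename assumed from kindnet's defaults rather than taken from this log:

	$ sudo ls /etc/cni/net.d/
	$ sudo cat /etc/cni/net.d/10-kindnet.conflist   # filename is an assumption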
	I1025 09:13:50.964956  253344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:13:50.965225  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:50.965274  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-891466 minikube.k8s.io/updated_at=2025_10_25T09_13_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=default-k8s-diff-port-891466 minikube.k8s.io/primary=true
	I1025 09:13:51.044672  253344 ops.go:34] apiserver oom_adj: -16
	I1025 09:13:51.044833  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:51.545833  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:52.045873  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:52.545835  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:53.045001  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:53.545347  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1025 09:13:52.271285  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:13:54.274192  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:13:54.045589  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:54.545579  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:55.045802  253344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:13:55.121446  253344 kubeadm.go:1113] duration metric: took 4.156284117s to wait for elevateKubeSystemPrivileges
	I1025 09:13:55.121479  253344 kubeadm.go:402] duration metric: took 15.142289536s to StartCluster
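The burst of `kubectl get sa default` runs above is a readiness poll: minikube retries until the "default" ServiceAccount exists before binding cluster-admin to it. A minimal sketch of the same loop, using only names and paths taken from the log:

	$ until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # matches the ~500ms cadence of the timestamps above
	  done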
	I1025 09:13:55.121495  253344 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:55.121560  253344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:13:55.124188  253344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:13:55.124495  253344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:13:55.124486  253344 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:13:55.124511  253344 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:13:55.124609  253344 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-891466"
	I1025 09:13:55.124606  253344 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-891466"
	I1025 09:13:55.124674  253344 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-891466"
	I1025 09:13:55.124650  253344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-891466"
	I1025 09:13:55.124713  253344 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:55.124708  253344 host.go:66] Checking if "default-k8s-diff-port-891466" exists ...
	I1025 09:13:55.125207  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:55.125449  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:55.126232  253344 out.go:179] * Verifying Kubernetes components...
	I1025 09:13:55.127706  253344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:13:55.151616  253344 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:13:55.152241  253344 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-891466"
	I1025 09:13:55.152287  253344 host.go:66] Checking if "default-k8s-diff-port-891466" exists ...
	I1025 09:13:55.152839  253344 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:13:55.153281  253344 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:13:55.153364  253344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:13:55.153497  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:55.193998  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:55.195373  253344 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:13:55.195447  253344 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:13:55.195553  253344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:13:55.220795  253344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:13:55.234796  253344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:13:55.283078  253344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:13:55.317608  253344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:13:55.340749  253344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:13:55.430240  253344 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1025 09:13:55.431605  253344 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-891466" to be "Ready" ...
	I1025 09:13:55.660404  253344 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
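The sed pipeline at 09:13:55.234796 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the gateway IP, which is what the "host record injected" line confirms. A sketch of inspecting the injected block (the Corefile key is the standard CoreDNS ConfigMap layout):

	$ kubectl -n kube-system get configmap coredns \
	    -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	#   hosts {
	#      192.168.76.1 host.minikube.internal
	#      fallthrough
	#   }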
	
	
	==> CRI-O <==
	Oct 25 09:13:10 no-preload-016092 crio[567]: time="2025-10-25T09:13:10.220527928Z" level=info msg="Created container 9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4/kubernetes-dashboard" id=4dffcfc9-602a-4417-89c5-4633ea54e5fa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:10 no-preload-016092 crio[567]: time="2025-10-25T09:13:10.221212261Z" level=info msg="Starting container: 9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc" id=4c0871d9-0b23-4ee6-9755-d8adbd95fe39 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:13:10 no-preload-016092 crio[567]: time="2025-10-25T09:13:10.223282018Z" level=info msg="Started container" PID=1728 containerID=9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4/kubernetes-dashboard id=4c0871d9-0b23-4ee6-9755-d8adbd95fe39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d3e761b6febe0d9e746c5e6ab6eae31fb3c3e60051c9aeb52bbaf9ca2804109
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.203544116Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=59592001-4b98-4dc5-a617-39463e1f3ee9 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.204696112Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f0fd89c1-9a50-4020-8701-d9b1e8cdc5e8 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.206100259Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper" id=ff5125c0-c4d4-4bcd-a5c4-8ef919797977 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.206251613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.212099392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.212591038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.234703047Z" level=info msg="Created container 48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper" id=ff5125c0-c4d4-4bcd-a5c4-8ef919797977 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.235363043Z" level=info msg="Starting container: 48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957" id=10ef94d9-f960-4a82-a44b-7163a47b566f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.237157788Z" level=info msg="Started container" PID=1746 containerID=48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper id=10ef94d9-f960-4a82-a44b-7163a47b566f name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb7e2419735d896306c88d1b65db425c69c65403f67a9fb4a1f1aac8762cf4a5
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.334133678Z" level=info msg="Removing container: 2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749" id=cfeed54d-ec00-437e-a35d-d3a700093dcb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:13:26 no-preload-016092 crio[567]: time="2025-10-25T09:13:26.345080011Z" level=info msg="Removed container 2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh/dashboard-metrics-scraper" id=cfeed54d-ec00-437e-a35d-d3a700093dcb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.345365493Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a7de27b4-d509-4413-95be-920e7fd16136 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.346449098Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=85280be6-38da-49df-ad19-bff4cf22872e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.34758891Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ac8cf5dc-ea82-4547-be69-3825b602fa49 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.347776503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355270442Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355510635Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eaa077ef4f275978fef15e106b6388606535b93fb39fe0c599d4c9b1be196eeb/merged/etc/passwd: no such file or directory"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355549712Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eaa077ef4f275978fef15e106b6388606535b93fb39fe0c599d4c9b1be196eeb/merged/etc/group: no such file or directory"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.355915213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.379984563Z" level=info msg="Created container 9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf: kube-system/storage-provisioner/storage-provisioner" id=ac8cf5dc-ea82-4547-be69-3825b602fa49 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.380775632Z" level=info msg="Starting container: 9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf" id=95d722b8-07b3-410f-8684-e1326c779f29 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:13:29 no-preload-016092 crio[567]: time="2025-10-25T09:13:29.383074907Z" level=info msg="Started container" PID=1760 containerID=9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf description=kube-system/storage-provisioner/storage-provisioner id=95d722b8-07b3-410f-8684-e1326c779f29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05567045ae6d91743c3ac4c6da11880ab9c59f08c5bc369e54b849ea72b6086b
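The `Failed to open /etc/passwd` / `/etc/group` warnings above are typically harmless: the storage-provisioner image ships without those files, so CRI-O falls back to numeric IDs. A hedged way to follow the container it started, reusing the ID from the `Started container` line:

	$ sudo crictl ps -a --name storage-provisioner
	$ sudo crictl logs 9bd58a21f5517   # unique ID prefix from the log above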
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9bd58a21f5517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   05567045ae6d9       storage-provisioner                          kube-system
	48ee308605e8a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   cb7e2419735d8       dashboard-metrics-scraper-6ffb444bf9-ft5jh   kubernetes-dashboard
	9a3c9cdae69ba       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago       Running             kubernetes-dashboard        0                   4d3e761b6febe       kubernetes-dashboard-855c9754f9-jnwc4        kubernetes-dashboard
	99317f7c2bffa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   8127a224132ce       coredns-66bc5c9577-g85s4                     kube-system
	e14e55c01173c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   cc55acd33cb5a       busybox                                      default
	ffd907d4e4196       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   05567045ae6d9       storage-provisioner                          kube-system
	9555087b4a95d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   bcaa56827f6d7       kindnet-mjnmk                                kube-system
	51bc04f01d285       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   f46a7bf426b16       kube-proxy-h4nh4                             kube-system
	33011a5a64acf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   51350c944ea62       kube-scheduler-no-preload-016092             kube-system
	6ac72fdf21daf       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   f1743fbf06896       kube-controller-manager-no-preload-016092    kube-system
	023f43058735f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   5e38d99bd7225       kube-apiserver-no-preload-016092             kube-system
	3e8098e047ed3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   38439de2eda64       etcd-no-preload-016092                       kube-system
	
	
	==> coredns [99317f7c2bffae4d40739f1b3aa6bab2ce12ad89e6c1c3c128a638478a0960af] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40578 - 37783 "HINFO IN 2295610001495175142.4577198704993816597. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.454777121s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
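The `dial tcp 10.96.0.1:443: i/o timeout` errors above mean CoreDNS could not reach the kubernetes Service VIP while the apiserver was coming back up. A hedged connectivity probe from inside the cluster (the busybox tag is an assumption):

	$ kubectl run netcheck --rm -it --restart=Never --image=busybox:1.36 -- \
	    nc -z -w 5 10.96.0.1 443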
	
	
	==> describe nodes <==
	Name:               no-preload-016092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-016092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=no-preload-016092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_12_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:11:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-016092
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:13:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:11:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:13:28 +0000   Sat, 25 Oct 2025 09:12:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-016092
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b1944563-5e07-4c47-8e9f-57e7b42f6bfa
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-g85s4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-016092                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-mjnmk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-016092              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-016092     200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-h4nh4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-016092              100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ft5jh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jnwc4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 58s                  kube-proxy       
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node no-preload-016092 event: Registered Node no-preload-016092 in Controller
	  Normal  NodeReady                98s                  kubelet          Node no-preload-016092 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-016092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-016092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-016092 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                  node-controller  Node no-preload-016092 event: Registered Node no-preload-016092 in Controller
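The three `Starting kubelet.` runs (2m1s, 116s, 62s ago) line up with the node's restarts during this test, and the request totals above (850m of 8 CPUs, hence 10%) come from the standard kubectl view. To re-render just the allocation summary, something like:

	$ kubectl describe node no-preload-016092 | sed -n '/Allocated resources:/,/Events:/p'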
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
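The repeated `martian source` lines are the kernel flagging packets that claim a loopback source (127.0.0.1) arriving on eth0; they are only logged when martian logging is on. A hedged check of that toggle:

	$ sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians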
	
	
	==> etcd [3e8098e047ed3043a00cc812d78042ae68cad7ea01ba443d06753c58aca09dec] <==
	{"level":"warn","ts":"2025-10-25T09:12:56.835212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.844705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.853174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.862422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.870387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.878411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.902307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.919846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.928307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.936566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.944020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.952045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.966683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.973993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:56.983209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:12:57.036620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47608","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:13:05.376722Z","caller":"traceutil/trace.go:172","msg":"trace[362027365] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"111.615635ms","start":"2025-10-25T09:13:05.265081Z","end":"2025-10-25T09:13:05.376697Z","steps":["trace[362027365] 'process raft request'  (duration: 111.465014ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:13:05.380690Z","caller":"traceutil/trace.go:172","msg":"trace[2064125728] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"113.983239ms","start":"2025-10-25T09:13:05.266687Z","end":"2025-10-25T09:13:05.380670Z","steps":["trace[2064125728] 'process raft request'  (duration: 113.845494ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:13:05.662592Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.004676ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789566310212764 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kubernetes-dashboard/dashboard-metrics-scraper-qfpkf\" mod_revision:597 > success:<request_put:<key:\"/registry/endpointslices/kubernetes-dashboard/dashboard-metrics-scraper-qfpkf\" value_size:1159 >> failure:<request_range:<key:\"/registry/endpointslices/kubernetes-dashboard/dashboard-metrics-scraper-qfpkf\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T09:13:05.662831Z","caller":"traceutil/trace.go:172","msg":"trace[1653880554] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"278.327307ms","start":"2025-10-25T09:13:05.384479Z","end":"2025-10-25T09:13:05.662807Z","steps":["trace[1653880554] 'process raft request'  (duration: 45.630417ms)","trace[1653880554] 'compare'  (duration: 231.908941ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:13:05.662838Z","caller":"traceutil/trace.go:172","msg":"trace[1122351575] linearizableReadLoop","detail":"{readStateIndex:636; appliedIndex:634; }","duration":"115.138222ms","start":"2025-10-25T09:13:05.547691Z","end":"2025-10-25T09:13:05.662829Z","steps":["trace[1122351575] 'read index received'  (duration: 33.805µs)","trace[1122351575] 'applied index is now lower than readState.Index'  (duration: 115.103881ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:13:05.662931Z","caller":"traceutil/trace.go:172","msg":"trace[1093739187] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"278.439195ms","start":"2025-10-25T09:13:05.384484Z","end":"2025-10-25T09:13:05.662923Z","steps":["trace[1093739187] 'process raft request'  (duration: 278.257127ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:13:05.663133Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.438065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh.1871b109b15f1598\" limit:1 ","response":"range_response_count:1 size:874"}
	{"level":"info","ts":"2025-10-25T09:13:05.663171Z","caller":"traceutil/trace.go:172","msg":"trace[1711968475] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh.1871b109b15f1598; range_end:; response_count:1; response_revision:607; }","duration":"115.487113ms","start":"2025-10-25T09:13:05.547675Z","end":"2025-10-25T09:13:05.663162Z","steps":["trace[1711968475] 'agreement among raft nodes before linearized reading'  (duration: 115.343872ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:13:32.744609Z","caller":"traceutil/trace.go:172","msg":"trace[134763161] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"141.125246ms","start":"2025-10-25T09:13:32.603462Z","end":"2025-10-25T09:13:32.744587Z","steps":["trace[134763161] 'process raft request'  (duration: 79.677967ms)","trace[134763161] 'compare'  (duration: 61.313721ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:13:57 up 56 min,  0 user,  load average: 2.51, 3.13, 2.14
	Linux no-preload-016092 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9555087b4a95dd49c3a02af93de2be326ddca27814e2068040e5e19d323de57c] <==
	I1025 09:12:58.806033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:12:58.806295       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:12:58.806507       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:12:58.806526       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:12:58.806557       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:12:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:12:59.090822       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:12:59.090849       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:12:59.090859       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:12:59.091048       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:12:59.491740       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:12:59.491778       1 metrics.go:72] Registering metrics
	I1025 09:12:59.491906       1 controller.go:711] "Syncing nftables rules"
	I1025 09:13:09.090096       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:09.090154       1 main.go:301] handling current node
	I1025 09:13:19.090927       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:19.090997       1 main.go:301] handling current node
	I1025 09:13:29.090016       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:29.090042       1 main.go:301] handling current node
	I1025 09:13:39.090417       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:39.090482       1 main.go:301] handling current node
	I1025 09:13:49.090493       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:13:49.090533       1 main.go:301] handling current node
	
	
	==> kube-apiserver [023f43058735fc1aa667aba8a40553db5ed69c2c3aa83f526a3647121923840a] <==
	I1025 09:12:57.553839       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:12:57.553853       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:12:57.553859       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:12:57.553865       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:12:57.554069       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:12:57.554081       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:12:57.556758       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:12:57.556825       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:12:57.559679       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:12:57.561280       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:12:57.569138       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:12:57.599423       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:12:57.600780       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:12:57.819561       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:12:57.854163       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:12:57.874845       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:12:57.882286       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:12:57.888464       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:12:57.925024       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.204.166"}
	I1025 09:12:57.935560       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.183.242"}
	I1025 09:12:58.456884       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:13:00.728239       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:13:00.925456       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:13:00.974136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:13:00.974137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6ac72fdf21daf14e251d8647264ae6703ade9663ba42a5c79cbd7ff91e1f523d] <==
	I1025 09:13:00.372196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:13:00.372234       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:13:00.372236       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:13:00.372273       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:13:00.372298       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:13:00.372333       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:13:00.372340       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:13:00.372363       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:13:00.372364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:13:00.372545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:13:00.373693       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:13:00.373728       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:13:00.373810       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:13:00.373912       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-016092"
	I1025 09:13:00.373975       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:13:00.376098       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:13:00.376116       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:13:00.376757       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:13:00.376775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:13:00.376784       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:13:00.378480       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:13:00.380435       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:13:00.390701       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:13:00.396975       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:13:00.400268       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [51bc04f01d285b33d2ffd2d4857d9986a3d390c118d677a906b8b1b3854fcffe] <==
	I1025 09:12:58.659060       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:12:58.751423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:12:58.852265       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:12:58.852309       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:12:58.852482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:12:58.875662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:12:58.875721       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:12:58.880877       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:12:58.881235       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:12:58.881322       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:12:58.882946       1 config.go:200] "Starting service config controller"
	I1025 09:12:58.882970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:12:58.882978       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:12:58.882995       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:12:58.882997       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:12:58.883022       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:12:58.883080       1 config.go:309] "Starting node config controller"
	I1025 09:12:58.883102       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:12:58.883111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:12:58.983550       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:12:58.983548       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:12:58.983560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
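The `nodePortAddresses is unset` warning above includes its own remedy. In a kubeadm-style cluster the setting lives in the kube-proxy ConfigMap; a hedged sketch (field name per KubeProxyConfiguration, with `primary` being the special value the warning suggests):

	$ kubectl -n kube-system edit configmap kube-proxy
	# in the config.conf data, set:
	#   nodePortAddresses: ["primary"]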
	
	
	==> kube-scheduler [33011a5a64acfce349c374b43be041eef3d52dab4c91a5a31072f67152719323] <==
	I1025 09:12:56.216532       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:12:57.473748       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:12:57.473786       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:12:57.473798       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:12:57.473808       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:12:57.527163       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:12:57.527279       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:12:57.531311       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:12:57.531356       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:12:57.533608       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:12:57.533694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:12:57.632054       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
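The requestheader warning above ships its own fix template. A hedged instantiation, with the binding name invented for illustration and `--user` substituted for the template's `--serviceaccount`, since the scheduler authenticates as the system:kube-scheduler user rather than a ServiceAccount:

	$ kubectl -n kube-system create rolebinding scheduler-authn-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler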
	
	
	==> kubelet <==
	Oct 25 09:13:00 no-preload-016092 kubelet[717]: I1025 09:13:00.922623     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkpp9\" (UniqueName: \"kubernetes.io/projected/2d30e5f2-2721-44b1-bd1f-e3da225a334d-kube-api-access-gkpp9\") pod \"kubernetes-dashboard-855c9754f9-jnwc4\" (UID: \"2d30e5f2-2721-44b1-bd1f-e3da225a334d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4"
	Oct 25 09:13:00 no-preload-016092 kubelet[717]: I1025 09:13:00.922662     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2d30e5f2-2721-44b1-bd1f-e3da225a334d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jnwc4\" (UID: \"2d30e5f2-2721-44b1-bd1f-e3da225a334d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4"
	Oct 25 09:13:05 no-preload-016092 kubelet[717]: I1025 09:13:05.260139     717 scope.go:117] "RemoveContainer" containerID="85b26a0323362c5da7d73760586bf2307648046f556e05c42d8ca30f6299375e"
	Oct 25 09:13:06 no-preload-016092 kubelet[717]: I1025 09:13:06.265677     717 scope.go:117] "RemoveContainer" containerID="85b26a0323362c5da7d73760586bf2307648046f556e05c42d8ca30f6299375e"
	Oct 25 09:13:06 no-preload-016092 kubelet[717]: I1025 09:13:06.265806     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:06 no-preload-016092 kubelet[717]: E1025 09:13:06.266098     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:07 no-preload-016092 kubelet[717]: I1025 09:13:07.270070     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:07 no-preload-016092 kubelet[717]: E1025 09:13:07.270261     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:08 no-preload-016092 kubelet[717]: I1025 09:13:08.144953     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:13:11 no-preload-016092 kubelet[717]: I1025 09:13:11.807557     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jnwc4" podStartSLOduration=2.919893607 podStartE2EDuration="11.807537083s" podCreationTimestamp="2025-10-25 09:13:00 +0000 UTC" firstStartedPulling="2025-10-25 09:13:01.184168955 +0000 UTC m=+6.079970731" lastFinishedPulling="2025-10-25 09:13:10.071812429 +0000 UTC m=+14.967614207" observedRunningTime="2025-10-25 09:13:10.33504336 +0000 UTC m=+15.230845156" watchObservedRunningTime="2025-10-25 09:13:11.807537083 +0000 UTC m=+16.703338880"
	Oct 25 09:13:12 no-preload-016092 kubelet[717]: I1025 09:13:12.597099     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:12 no-preload-016092 kubelet[717]: E1025 09:13:12.597297     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: I1025 09:13:26.202923     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: I1025 09:13:26.332150     717 scope.go:117] "RemoveContainer" containerID="2360ac8d351f62c57c5de22a7613dea6826a4226cdc4271e9f7876bf71e73749"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: I1025 09:13:26.332628     717 scope.go:117] "RemoveContainer" containerID="48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	Oct 25 09:13:26 no-preload-016092 kubelet[717]: E1025 09:13:26.333083     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:29 no-preload-016092 kubelet[717]: I1025 09:13:29.344939     717 scope.go:117] "RemoveContainer" containerID="ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0"
	Oct 25 09:13:32 no-preload-016092 kubelet[717]: I1025 09:13:32.597296     717 scope.go:117] "RemoveContainer" containerID="48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	Oct 25 09:13:32 no-preload-016092 kubelet[717]: E1025 09:13:32.597565     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:45 no-preload-016092 kubelet[717]: I1025 09:13:45.204002     717 scope.go:117] "RemoveContainer" containerID="48ee308605e8ac7614906ca833ced98de2f96accf7db196184ad43ac857a9957"
	Oct 25 09:13:45 no-preload-016092 kubelet[717]: E1025 09:13:45.204206     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft5jh_kubernetes-dashboard(2eeddb83-82cd-4c57-b4d2-0d76ab4904ac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft5jh" podUID="2eeddb83-82cd-4c57-b4d2-0d76ab4904ac"
	Oct 25 09:13:52 no-preload-016092 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:13:52 no-preload-016092 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:13:52 no-preload-016092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:13:52 no-preload-016092 systemd[1]: kubelet.service: Consumed 1.808s CPU time.
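	
	Note: the back-off window growing from 10s to 20s above is kubelet's standard CrashLoopBackOff doubling. To see why dashboard-metrics-scraper keeps exiting, its previous container log could be pulled while the cluster exists (pod name copied from the messages above):
	
	    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-ft5jh --previous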
	
	
	==> kubernetes-dashboard [9a3c9cdae69ba7daf54a1b9f51f10c4f4142122b82fc6630c756566fdbcdc5dc] <==
	2025/10/25 09:13:10 Using namespace: kubernetes-dashboard
	2025/10/25 09:13:10 Using in-cluster config to connect to apiserver
	2025/10/25 09:13:10 Using secret token for csrf signing
	2025/10/25 09:13:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:13:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:13:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:13:10 Generating JWE encryption key
	2025/10/25 09:13:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:13:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:13:10 Initializing JWE encryption key from synchronized object
	2025/10/25 09:13:10 Creating in-cluster Sidecar client
	2025/10/25 09:13:10 Serving insecurely on HTTP port: 9090
	2025/10/25 09:13:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:13:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:13:10 Starting overwatch
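	
	Note: the metric client health check targets the dashboard-metrics-scraper Service, whose only backing pod is the one crash-looping in the kubelet log above, so the 30-second retries are expected until that pod stays up. The Service itself can be confirmed with:
	
	    kubectl -n kubernetes-dashboard get svc dashboard-metrics-scraper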
	
	
	==> storage-provisioner [9bd58a21f551717dd758daaa587f5900e985d4afef6a1c95e9fc626048acaccf] <==
	I1025 09:13:29.407415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:13:29.407467       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:13:29.409919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:32.932295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:37.196392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:40.794517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:43.848480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:46.870790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:46.876941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:13:46.877089       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:13:46.877258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-016092_2d5f2cd9-5616-46f3-822c-58c6b4f99eca!
	I1025 09:13:46.877257       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed8381ef-ef55-4ab4-b1c1-024372829c5a", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-016092_2d5f2cd9-5616-46f3-822c-58c6b4f99eca became leader
	W1025 09:13:46.880297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:46.884354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:13:46.977930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-016092_2d5f2cd9-5616-46f3-822c-58c6b4f99eca!
	W1025 09:13:48.887573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:48.893544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:50.897252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:50.901974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:52.904790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:52.908731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:54.912118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:54.919225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:56.922423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:13:56.926499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
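	
	Note: the repeated warnings come from the provisioner's leader election, which still stores its lease on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event above). That lease object can be inspected with:
	
	    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml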
	
	
	==> storage-provisioner [ffd907d4e41966fee3111e9d894ecb29cd411f80ecf41a4d2d9381dfc6b25cb0] <==
	I1025 09:12:58.619939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:13:28.624016       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
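	
	Note: 10.96.0.1:443 is the in-cluster address of the default "kubernetes" Service, so this i/o timeout means the first provisioner instance never reached the API server over the service network; the replacement instance in the previous section connected successfully about thirty seconds later. The Service address can be confirmed with:
	
	    kubectl get svc kubernetes -n default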
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016092 -n no-preload-016092
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016092 -n no-preload-016092: exit status 2 (339.197712ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-016092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (288.100289ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:14:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
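
Note: minikube's paused-state check shells into the node and runs `sudo runc list -f json`, which is what fails here because /run/runc does not exist inside the container. A sketch for reproducing the check by hand against the same profile:

	minikube -p default-k8s-diff-port-891466 ssh -- sudo runc list -f json
	minikube -p default-k8s-diff-port-891466 ssh -- ls -ld /run/runc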
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-891466 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-891466 describe deploy/metrics-server -n kube-system: exit status 1 (79.793083ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-891466 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
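Note: the assertion expects the metrics-server Deployment image to have been rewritten to fake.domain/registry.k8s.io/echoserver:1.4 by the --images/--registries flags; since enabling the addon failed, the Deployment was never created and there is no deployment info to print. Had it existed, the image could be read directly (a hypothetical check for this context):

	kubectl --context default-k8s-diff-port-891466 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'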
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-891466
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-891466:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d",
	        "Created": "2025-10-25T09:13:33.96941541Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 254264,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:13:34.008505359Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/hosts",
	        "LogPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d-json.log",
	        "Name": "/default-k8s-diff-port-891466",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-891466:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-891466",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d",
	                "LowerDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-891466",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-891466/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-891466",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-891466",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-891466",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ecc307abde3054a602e2f55cf17c99c1d577518ffa13202128bbbbbff017144a",
	            "SandboxKey": "/var/run/docker/netns/ecc307abde30",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-891466": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:ab:54:e5:49:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b0da8ae663923a6a96619f04827a51fa66502ca86c536d48116f797af6b2cd6f",
	                    "EndpointID": "75c0bcd695de6f5f4cac31c8d3b7a49cae1e99b529af08d2afbfada6262b3dfb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-891466",
	                        "f52ce971b3b8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
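Note: per the inspect output above, every container port is published on 127.0.0.1 with an ephemeral host port (the API server's 8444/tcp maps to 33083 here). The same mapping can be read without parsing JSON:

	docker port default-k8s-diff-port-891466 8444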
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-891466 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-891466 logs -n 25: (1.137449379s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-959110 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p missing-upgrade-047620                                                                                                                                                                                                                     │ missing-upgrade-047620       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ stop    │ -p no-preload-016092 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:13 UTC │
	│ image   │ old-k8s-version-959110 image list --format=json                                                                                                                                                                                               │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ pause   │ -p old-k8s-version-959110 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p cert-expiration-851718                                                                                                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p disable-driver-mounts-664368                                                                                                                                                                                                               │ disable-driver-mounts-664368 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ image   │ no-preload-016092 image list --format=json                                                                                                                                                                                                    │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ pause   │ -p no-preload-016092 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:14:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
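	Note: decoding the first entry below against that format: "I" is the severity (Info), "1025" the month and day, "09:14:01.349429" the time, "259325" the process id, and "out.go:360" the source file and line.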
	I1025 09:14:01.349429  259325 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:14:01.349695  259325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:01.349703  259325 out.go:374] Setting ErrFile to fd 2...
	I1025 09:14:01.349707  259325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:01.349881  259325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:14:01.350326  259325 out.go:368] Setting JSON to false
	I1025 09:14:01.351488  259325 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3389,"bootTime":1761380252,"procs":372,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:14:01.351566  259325 start.go:141] virtualization: kvm guest
	I1025 09:14:01.353581  259325 out.go:179] * [newest-cni-036155] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:14:01.354862  259325 notify.go:220] Checking for updates...
	I1025 09:14:01.354911  259325 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:14:01.356248  259325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:14:01.357829  259325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:14:01.359191  259325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:14:01.360570  259325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:14:01.362056  259325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:14:01.363964  259325 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364078  259325 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364155  259325 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364286  259325 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:14:01.388723  259325 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:14:01.388851  259325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:14:01.446757  259325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:14:01.436278421 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:14:01.446909  259325 docker.go:318] overlay module found
	I1025 09:14:01.448814  259325 out.go:179] * Using the docker driver based on user configuration
	I1025 09:14:01.449910  259325 start.go:305] selected driver: docker
	I1025 09:14:01.449923  259325 start.go:925] validating driver "docker" against <nil>
	I1025 09:14:01.449933  259325 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:14:01.450511  259325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:14:01.511090  259325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:14:01.500485086 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:14:01.511242  259325 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 09:14:01.511267  259325 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 09:14:01.511481  259325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:14:01.513762  259325 out.go:179] * Using Docker driver with root privileges
	I1025 09:14:01.514937  259325 cni.go:84] Creating CNI manager for ""
	I1025 09:14:01.515024  259325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:14:01.515037  259325 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:14:01.515128  259325 start.go:349] cluster config:
	{Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:14:01.516524  259325 out.go:179] * Starting "newest-cni-036155" primary control-plane node in "newest-cni-036155" cluster
	I1025 09:14:01.517782  259325 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:14:01.518984  259325 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:14:01.520226  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:01.520270  259325 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:14:01.520295  259325 cache.go:58] Caching tarball of preloaded images
	I1025 09:14:01.520378  259325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:14:01.520391  259325 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:14:01.520490  259325 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:14:01.520629  259325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json ...
	I1025 09:14:01.520680  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json: {Name:mkbfe9b74fbf6dcc9fce3c2e514dd100d024d023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:01.542057  259325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:14:01.542076  259325 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:14:01.542091  259325 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:14:01.542116  259325 start.go:360] acquireMachinesLock for newest-cni-036155: {Name:mk5b9af4be10aaa846ed9c8c31160df3caae8c3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:14:01.542211  259325 start.go:364] duration metric: took 81.03µs to acquireMachinesLock for "newest-cni-036155"
	I1025 09:14:01.542235  259325 start.go:93] Provisioning new machine with config: &{Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:14:01.542374  259325 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:13:58.278667  225660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.058221196s)
	W1025 09:13:58.278709  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1025 09:13:58.278726  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:13:58.278748  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:13:58.315063  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:58.315094  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:58.352625  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:58.352693  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:58.381187  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:58.381214  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:58.436157  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:58.436186  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:58.492499  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:58.492535  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:58.528534  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:58.528568  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:58.632433  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:58.632471  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:01.149149  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:02.578502  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:57860->192.168.85.2:8443: read: connection reset by peer
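The two lines above show the healthz loop: minikube polls the apiserver's /healthz endpoint over HTTPS and records any transport failure (here a connection reset) as "stopped" before falling back to log gathering. A minimal Go sketch of that kind of probe, assuming the URL from the log and a self-signed apiserver certificate (so the probe skips verification); the helper name is ours, not minikube's:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes an apiserver /healthz endpoint the way the lines
// above describe: any transport error (refused, reset, TLS timeout) is
// reported as "stopped", a 200 response as healthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver cert is self-signed inside the
			// cluster, so a bare probe has to skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %w", err) // e.g. connection reset by peer
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.85.2:8443/healthz"))
}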
	I1025 09:14:02.578582  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:02.578671  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:02.612993  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:02.613015  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:14:02.613019  225660 cri.go:89] found id: ""
	I1025 09:14:02.613026  225660 logs.go:282] 2 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:14:02.613087  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.617248  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.621187  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:02.621252  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:02.651262  225660 cri.go:89] found id: ""
	I1025 09:14:02.651292  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.651304  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:02.651315  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:02.651375  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:02.680223  225660 cri.go:89] found id: ""
	I1025 09:14:02.680246  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.680255  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:02.680261  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:02.680304  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:02.708376  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:02.708400  225660 cri.go:89] found id: ""
	I1025 09:14:02.708419  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:02.708470  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.712497  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:02.712567  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:02.743096  225660 cri.go:89] found id: ""
	I1025 09:14:02.743123  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.743135  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:02.743142  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:02.743189  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:02.776405  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:02.776424  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:02.776428  225660 cri.go:89] found id: ""
	I1025 09:14:02.776435  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:02.776494  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.780906  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.784758  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:02.784832  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1025 09:13:59.435798  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:01.935111  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:01.271430  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:14:03.770895  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:14:01.544621  259325 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:14:01.544863  259325 start.go:159] libmachine.API.Create for "newest-cni-036155" (driver="docker")
	I1025 09:14:01.544898  259325 client.go:168] LocalClient.Create starting
	I1025 09:14:01.544971  259325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:14:01.545008  259325 main.go:141] libmachine: Decoding PEM data...
	I1025 09:14:01.545033  259325 main.go:141] libmachine: Parsing certificate...
	I1025 09:14:01.545103  259325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:14:01.545131  259325 main.go:141] libmachine: Decoding PEM data...
	I1025 09:14:01.545157  259325 main.go:141] libmachine: Parsing certificate...
	I1025 09:14:01.545523  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:14:01.564874  259325 cli_runner.go:211] docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:14:01.564937  259325 network_create.go:284] running [docker network inspect newest-cni-036155] to gather additional debugging logs...
	I1025 09:14:01.564956  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155
	W1025 09:14:01.582897  259325 cli_runner.go:211] docker network inspect newest-cni-036155 returned with exit code 1
	I1025 09:14:01.582929  259325 network_create.go:287] error running [docker network inspect newest-cni-036155]: docker network inspect newest-cni-036155: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-036155 not found
	I1025 09:14:01.582945  259325 network_create.go:289] output of [docker network inspect newest-cni-036155]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-036155 not found
	
	** /stderr **
	I1025 09:14:01.583104  259325 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:14:01.601343  259325 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:14:01.602058  259325 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:14:01.602766  259325 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:14:01.603404  259325 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:14:01.603905  259325 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9aa42478a513 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:4e:f8:f5:5b:2e} reservation:<nil>}
	I1025 09:14:01.604415  259325 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5d58a21465e1 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4e:78:a8:09:a3:02} reservation:<nil>}
	I1025 09:14:01.605183  259325 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb4940}
	I1025 09:14:01.605204  259325 network_create.go:124] attempt to create docker network newest-cni-036155 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:14:01.605249  259325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-036155 newest-cni-036155
	I1025 09:14:01.664530  259325 network_create.go:108] docker network newest-cni-036155 192.168.103.0/24 created
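The "skipping subnet ... that is taken" run above is a free-subnet scan: candidate 192.168.x.0/24 ranges are tried in order (the log steps the third octet by 9: 49, 58, 67, 76, 85, 94, 103) until one has no existing bridge, and that subnet becomes the new docker network. A hedged Go sketch of the scan, assuming the taken set was collected beforehand from docker network inspect; the function is illustrative, not minikube's implementation:

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 subnets, starting at
// x=49 and stepping by 9 as the log above does, returning the first one
// not present in the taken set.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{ // the subnets the log reports as taken
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24 true
}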
	I1025 09:14:01.664563  259325 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-036155" container
	I1025 09:14:01.664653  259325 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:14:01.684160  259325 cli_runner.go:164] Run: docker volume create newest-cni-036155 --label name.minikube.sigs.k8s.io=newest-cni-036155 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:14:01.703110  259325 oci.go:103] Successfully created a docker volume newest-cni-036155
	I1025 09:14:01.703199  259325 cli_runner.go:164] Run: docker run --rm --name newest-cni-036155-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-036155 --entrypoint /usr/bin/test -v newest-cni-036155:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:14:02.100402  259325 oci.go:107] Successfully prepared a docker volume newest-cni-036155
	I1025 09:14:02.100450  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:02.100473  259325 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:14:02.100556  259325 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-036155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:14:02.813543  225660 cri.go:89] found id: ""
	I1025 09:14:02.813571  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.813581  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:02.813588  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:02.813668  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:02.843013  225660 cri.go:89] found id: ""
	I1025 09:14:02.843039  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.843049  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:02.843065  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:02.843079  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:02.858191  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:14:02.858224  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:14:02.894345  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:02.894398  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:02.924538  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:02.924566  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:02.981267  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:02.981304  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:03.096416  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:03.096461  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:03.168015  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:03.168040  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:03.168054  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:03.205969  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:03.206012  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:03.271485  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:03.271526  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:03.300749  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:03.300783  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
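Each "Gathering logs for ..." round above runs the same fixed set of remote commands: journalctl for kubelet and CRI-O, filtered dmesg, kubectl describe nodes, crictl for per-container logs, and a container listing with a shell fallback (`which crictl || echo crictl` ... `|| sudo docker ps -a`) so it still works when crictl is not on root's PATH. A sketch of that collection loop, with the assumption that we run locally via os/exec instead of minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same diagnostic commands the log rounds above run over SSH.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}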
	I1025 09:14:05.840548  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:05.841022  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:05.841081  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:05.841139  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:05.869264  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:05.869286  225660 cri.go:89] found id: ""
	I1025 09:14:05.869293  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:05.869340  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:05.873358  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:05.873414  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:05.901366  225660 cri.go:89] found id: ""
	I1025 09:14:05.901395  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.901406  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:05.901413  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:05.901467  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:05.931032  225660 cri.go:89] found id: ""
	I1025 09:14:05.931059  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.931069  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:05.931076  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:05.931142  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:05.959495  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:05.959515  225660 cri.go:89] found id: ""
	I1025 09:14:05.959523  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:05.959567  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:05.963756  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:05.963826  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:05.991899  225660 cri.go:89] found id: ""
	I1025 09:14:05.991925  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.991943  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:05.991953  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:05.992018  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:06.019791  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:06.019811  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:06.019815  225660 cri.go:89] found id: ""
	I1025 09:14:06.019822  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:06.019886  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:06.024190  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:06.028096  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:06.028161  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:06.055987  225660 cri.go:89] found id: ""
	I1025 09:14:06.056018  225660 logs.go:282] 0 containers: []
	W1025 09:14:06.056029  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:06.056035  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:06.056090  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:06.083950  225660 cri.go:89] found id: ""
	I1025 09:14:06.083976  225660 logs.go:282] 0 containers: []
	W1025 09:14:06.083987  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:06.084004  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:06.084019  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:06.110553  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:06.110582  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:06.164204  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:06.164238  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:06.253207  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:06.253241  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:06.313928  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:06.313953  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:06.313968  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:06.346421  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:06.346466  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:06.361467  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:06.361496  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:06.393406  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:06.393444  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:06.444918  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:06.444948  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	W1025 09:14:03.935273  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:06.435636  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	I1025 09:14:06.935349  253344 node_ready.go:49] node "default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:06.935378  253344 node_ready.go:38] duration metric: took 11.503747191s for node "default-k8s-diff-port-891466" to be "Ready" ...
	I1025 09:14:06.935390  253344 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:14:06.935479  253344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:14:06.948160  253344 api_server.go:72] duration metric: took 11.823550151s to wait for apiserver process to appear ...
	I1025 09:14:06.948193  253344 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:14:06.948215  253344 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:14:06.953340  253344 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:14:06.954553  253344 api_server.go:141] control plane version: v1.34.1
	I1025 09:14:06.954586  253344 api_server.go:131] duration metric: took 6.384823ms to wait for apiserver health ...
	I1025 09:14:06.954598  253344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:14:06.958083  253344 system_pods.go:59] 8 kube-system pods found
	I1025 09:14:06.958116  253344 system_pods.go:61] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:06.958122  253344 system_pods.go:61] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:06.958130  253344 system_pods.go:61] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:06.958135  253344 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:06.958140  253344 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:06.958151  253344 system_pods.go:61] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:06.958156  253344 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:06.958167  253344 system_pods.go:61] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:06.958175  253344 system_pods.go:74] duration metric: took 3.569351ms to wait for pod list to return data ...
	I1025 09:14:06.958188  253344 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:14:06.960663  253344 default_sa.go:45] found service account: "default"
	I1025 09:14:06.960687  253344 default_sa.go:55] duration metric: took 2.491182ms for default service account to be created ...
	I1025 09:14:06.960698  253344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:14:06.963911  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:06.963945  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:06.963955  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:06.963967  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:06.963974  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:06.963981  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:06.963989  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:06.964176  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:06.964191  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:06.964221  253344 retry.go:31] will retry after 290.946821ms: missing components: kube-dns
	I1025 09:14:07.261256  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.261299  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.261308  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.261319  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.261325  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.261331  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.261372  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.261383  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.261392  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.261412  253344 retry.go:31] will retry after 251.1932ms: missing components: kube-dns
	I1025 09:14:07.516457  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.516488  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.516494  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.516500  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.516504  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.516508  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.516512  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.516517  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.516524  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.516541  253344 retry.go:31] will retry after 312.108611ms: missing components: kube-dns
	I1025 09:14:07.832521  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.832555  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.832561  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.832567  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.832573  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.832577  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.832580  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.832584  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.832591  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.832610  253344 retry.go:31] will retry after 578.903074ms: missing components: kube-dns
	I1025 09:14:08.416051  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.416084  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Running
	I1025 09:14:08.416092  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:08.416099  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:08.416104  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:08.416109  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:08.416113  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:08.416116  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:08.416121  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Running
	I1025 09:14:08.416131  253344 system_pods.go:126] duration metric: took 1.455426427s to wait for k8s-apps to be running ...
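The "will retry after 290.946821ms: missing components: kube-dns" sequence above is a poll loop: list the kube-system pods, check that every expected component is Running, and sleep a varying, slowly growing interval before the next attempt until CoreDNS comes up. A stripped-down sketch of the pattern; the jittered backoff is an assumption, since the log only shows that the intervals vary and grow:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents polls check until it succeeds or the deadline
// passes, sleeping a jittered, growing interval between attempts,
// mirroring the retry.go lines above.
func waitForComponents(check func() error, deadline time.Duration) error {
	base := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		err := check()
		if err == nil {
			return nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		base += base / 4 // grow the interval slowly
	}
	return errors.New("timed out waiting for components")
}

func main() {
	attempts := 0
	_ = waitForComponents(func() error {
		if attempts++; attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil // e.g. the coredns pod finally reports Running
	}, 30*time.Second)
}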
	I1025 09:14:08.416145  253344 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:14:08.416197  253344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:14:08.429617  253344 system_svc.go:56] duration metric: took 13.46202ms WaitForService to wait for kubelet
	I1025 09:14:08.429689  253344 kubeadm.go:586] duration metric: took 13.305083699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:14:08.429711  253344 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:14:08.432623  253344 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:14:08.432665  253344 node_conditions.go:123] node cpu capacity is 8
	I1025 09:14:08.432680  253344 node_conditions.go:105] duration metric: took 2.964083ms to run NodePressure ...
	I1025 09:14:08.432693  253344 start.go:241] waiting for startup goroutines ...
	I1025 09:14:08.432702  253344 start.go:246] waiting for cluster config update ...
	I1025 09:14:08.432717  253344 start.go:255] writing updated cluster config ...
	I1025 09:14:08.432974  253344 ssh_runner.go:195] Run: rm -f paused
	I1025 09:14:08.436927  253344 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
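After the cluster reports healthy, the extra-wait phase above makes one more pass: every kube-system pod carrying one of the listed control-plane labels must have a Ready condition of True, as the per-pod waits below show. A minimal client-go sketch of that readiness check; the kubeconfig path and single label selector are simplifying assumptions:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One of the label selectors the extra-wait line above lists.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("pod %q ready: %v\n", p.Name, podReady(&p))
	}
}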
	I1025 09:14:08.440402  253344 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.444662  253344 pod_ready.go:94] pod "coredns-66bc5c9577-72zpn" is "Ready"
	I1025 09:14:08.444683  253344 pod_ready.go:86] duration metric: took 4.260186ms for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.446669  253344 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.450415  253344 pod_ready.go:94] pod "etcd-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.450440  253344 pod_ready.go:86] duration metric: took 3.750274ms for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.452271  253344 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.455682  253344 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.455704  253344 pod_ready.go:86] duration metric: took 3.413528ms for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.457512  253344 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:14:05.771472  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:14:08.271104  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:14:08.770948  247074 node_ready.go:49] node "embed-certs-106968" is "Ready"
	I1025 09:14:08.770978  247074 node_ready.go:38] duration metric: took 41.503136723s for node "embed-certs-106968" to be "Ready" ...
	I1025 09:14:08.770991  247074 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:14:08.771040  247074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:14:08.786566  247074 api_server.go:72] duration metric: took 41.819658043s to wait for apiserver process to appear ...
	I1025 09:14:08.786597  247074 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:14:08.786620  247074 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 09:14:08.791819  247074 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 09:14:08.792653  247074 api_server.go:141] control plane version: v1.34.1
	I1025 09:14:08.792675  247074 api_server.go:131] duration metric: took 6.071281ms to wait for apiserver health ...
	I1025 09:14:08.792683  247074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:14:08.796024  247074 system_pods.go:59] 8 kube-system pods found
	I1025 09:14:08.796066  247074 system_pods.go:61] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.796076  247074 system_pods.go:61] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.796088  247074 system_pods.go:61] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.796094  247074 system_pods.go:61] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.796103  247074 system_pods.go:61] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.796108  247074 system_pods.go:61] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.796114  247074 system_pods.go:61] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.796119  247074 system_pods.go:61] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.796133  247074 system_pods.go:74] duration metric: took 3.442989ms to wait for pod list to return data ...
	I1025 09:14:08.796148  247074 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:14:08.798369  247074 default_sa.go:45] found service account: "default"
	I1025 09:14:08.798387  247074 default_sa.go:55] duration metric: took 2.229844ms for default service account to be created ...
	I1025 09:14:08.798394  247074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:14:08.801058  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.801082  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.801088  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.801093  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.801096  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.801100  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.801104  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.801107  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.801112  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.801132  247074 retry.go:31] will retry after 190.781972ms: missing components: kube-dns
	I1025 09:14:08.995887  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.995925  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.995933  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.995941  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.995947  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.995954  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.995959  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.995966  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.995974  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.995996  247074 retry.go:31] will retry after 247.582365ms: missing components: kube-dns
	I1025 09:14:09.247882  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:09.247915  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:09.247921  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:09.247927  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:09.247931  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:09.247935  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:09.247940  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:09.247944  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:09.247949  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:09.247963  247074 retry.go:31] will retry after 418.536389ms: missing components: kube-dns
	I1025 09:14:09.670936  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:09.670969  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Running
	I1025 09:14:09.670977  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:09.670983  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:09.670988  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:09.670993  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:09.670998  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:09.671006  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:09.671011  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Running
	I1025 09:14:09.671021  247074 system_pods.go:126] duration metric: took 872.62006ms to wait for k8s-apps to be running ...
	I1025 09:14:09.671033  247074 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:14:09.671082  247074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:14:09.684149  247074 system_svc.go:56] duration metric: took 13.109824ms WaitForService to wait for kubelet
	I1025 09:14:09.684176  247074 kubeadm.go:586] duration metric: took 42.717274637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:14:09.684197  247074 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:14:09.687014  247074 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:14:09.687037  247074 node_conditions.go:123] node cpu capacity is 8
	I1025 09:14:09.687050  247074 node_conditions.go:105] duration metric: took 2.847789ms to run NodePressure ...
	I1025 09:14:09.687060  247074 start.go:241] waiting for startup goroutines ...
	I1025 09:14:09.687067  247074 start.go:246] waiting for cluster config update ...
	I1025 09:14:09.687077  247074 start.go:255] writing updated cluster config ...
	I1025 09:14:09.687328  247074 ssh_runner.go:195] Run: rm -f paused
	I1025 09:14:09.691103  247074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:09.694610  247074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.698843  247074 pod_ready.go:94] pod "coredns-66bc5c9577-dx4j4" is "Ready"
	I1025 09:14:09.698866  247074 pod_ready.go:86] duration metric: took 4.23265ms for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.700733  247074 pod_ready.go:83] waiting for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.704283  247074 pod_ready.go:94] pod "etcd-embed-certs-106968" is "Ready"
	I1025 09:14:09.704303  247074 pod_ready.go:86] duration metric: took 3.551149ms for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.706066  247074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.709547  247074 pod_ready.go:94] pod "kube-apiserver-embed-certs-106968" is "Ready"
	I1025 09:14:09.709564  247074 pod_ready.go:86] duration metric: took 3.482629ms for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.711117  247074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.840767  253344 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.840794  253344 pod_ready.go:86] duration metric: took 383.263633ms for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.041420  253344 pod_ready.go:83] waiting for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.441977  253344 pod_ready.go:94] pod "kube-proxy-rmqbr" is "Ready"
	I1025 09:14:09.442007  253344 pod_ready.go:86] duration metric: took 400.561652ms for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.641678  253344 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.041042  253344 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:10.041068  253344 pod_ready.go:86] duration metric: took 399.361298ms for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.041080  253344 pod_ready.go:40] duration metric: took 1.604125716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:10.083846  253344 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:14:10.085911  253344 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-891466" cluster and "default" namespace by default
	I1025 09:14:10.095667  247074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-106968" is "Ready"
	I1025 09:14:10.095699  247074 pod_ready.go:86] duration metric: took 384.564763ms for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.296396  247074 pod_ready.go:83] waiting for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.695915  247074 pod_ready.go:94] pod "kube-proxy-sm8hw" is "Ready"
	I1025 09:14:10.695940  247074 pod_ready.go:86] duration metric: took 399.512784ms for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.895258  247074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:11.295963  247074 pod_ready.go:94] pod "kube-scheduler-embed-certs-106968" is "Ready"
	I1025 09:14:11.295996  247074 pod_ready.go:86] duration metric: took 400.705834ms for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:11.296011  247074 pod_ready.go:40] duration metric: took 1.604868452s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:11.348313  247074 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:14:06.610431  259325 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-036155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.50981258s)
	I1025 09:14:06.610467  259325 kic.go:203] duration metric: took 4.509989969s to extract preloaded images to volume ...
	W1025 09:14:06.610587  259325 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:14:06.610634  259325 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:14:06.610712  259325 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:14:06.666144  259325 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-036155 --name newest-cni-036155 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-036155 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-036155 --network newest-cni-036155 --ip 192.168.103.2 --volume newest-cni-036155:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
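The docker run above launches the node container itself: privileged with seccomp and apparmor unconfined (it runs systemd, CRI-O and nested containers), tmpfs mounts for /tmp and /run, the cluster volume at /var, a fixed IP on the freshly created network, a 3 GiB memory cap, and the apiserver and SSH ports published on random loopback ports. A sketch of assembling that invocation with os/exec, using a trimmed subset of the flags copied from the log line; the wrapper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "newest-cni-036155"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	args := []string{"run", "-d", "-t",
		// Full privileges: the node container runs systemd and CRI-O.
		"--privileged", "--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", name, "--name", name,
		"--network", name, "--ip", "192.168.103.2",
		"--volume", name + ":/var",
		"--memory=3072mb",
		"-e", "container=docker",
		// Publish the apiserver and SSH on random 127.0.0.1 ports.
		"--expose", "8443", "--publish=127.0.0.1::8443", "--publish=127.0.0.1::22",
		image,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}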
	I1025 09:14:06.972900  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Running}}
	I1025 09:14:06.993336  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.013258  259325 cli_runner.go:164] Run: docker exec newest-cni-036155 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:14:07.057407  259325 oci.go:144] the created container "newest-cni-036155" has a running status.
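Note: the node container publishes sshd on an ephemeral host port (33085 later in this log); the mapping can be read back directly (sketch):

    # show the host port mapped to the node container's port 22
    docker port newest-cni-036155 22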
	I1025 09:14:07.057438  259325 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa...
	I1025 09:14:07.113913  259325 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:14:07.147153  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.167068  259325 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:14:07.167088  259325 kic_runner.go:114] Args: [docker exec --privileged newest-cni-036155 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:14:07.214916  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.241483  259325 machine.go:93] provisionDockerMachine start ...
	I1025 09:14:07.241575  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:07.268234  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:07.268673  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:07.268698  259325 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:14:07.269464  259325 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37304->127.0.0.1:33085: read: connection reset by peer
	I1025 09:14:10.411580  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-036155
	
	I1025 09:14:10.411618  259325 ubuntu.go:182] provisioning hostname "newest-cni-036155"
	I1025 09:14:10.411703  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:10.430482  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:10.430731  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:10.430747  259325 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-036155 && echo "newest-cni-036155" | sudo tee /etc/hostname
	I1025 09:14:10.585307  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-036155
	
	I1025 09:14:10.585419  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:10.606084  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:10.606313  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:10.606331  259325 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-036155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-036155/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-036155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:14:10.747795  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
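Note: after the hostname script above completes, both the hostname and the 127.0.1.1 entry should be visible inside the node; a manual spot-check (sketch, assuming the container is still running):

    docker exec newest-cni-036155 hostname
    docker exec newest-cni-036155 grep newest-cni-036155 /etc/hosts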
	I1025 09:14:10.747824  259325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:14:10.747864  259325 ubuntu.go:190] setting up certificates
	I1025 09:14:10.747881  259325 provision.go:84] configureAuth start
	I1025 09:14:10.747955  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:10.766485  259325 provision.go:143] copyHostCerts
	I1025 09:14:10.766572  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:14:10.766587  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:14:10.766695  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:14:10.766836  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:14:10.766852  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:14:10.766897  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:14:10.766999  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:14:10.767008  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:14:10.767046  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:14:10.767144  259325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.newest-cni-036155 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-036155]
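Note: the machine server cert generated above should carry exactly the SANs listed in the log line; openssl can confirm this on the host (sketch):

    # print the Subject Alternative Names baked into the machine server cert
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'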
	I1025 09:14:11.350247  247074 out.go:179] * Done! kubectl is now configured to use "embed-certs-106968" cluster and "default" namespace by default
	I1025 09:14:08.972298  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:08.972739  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:08.972796  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:08.972855  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:09.003134  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:09.003160  225660 cri.go:89] found id: ""
	I1025 09:14:09.003170  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:09.003229  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.007677  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:09.007750  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:09.038302  225660 cri.go:89] found id: ""
	I1025 09:14:09.038326  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.038335  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:09.038341  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:09.038431  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:09.066635  225660 cri.go:89] found id: ""
	I1025 09:14:09.066680  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.066692  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:09.066698  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:09.066754  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:09.093560  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:09.093582  225660 cri.go:89] found id: ""
	I1025 09:14:09.093591  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:09.093678  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.097667  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:09.097735  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:09.124755  225660 cri.go:89] found id: ""
	I1025 09:14:09.124779  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.124787  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:09.124792  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:09.124838  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:09.151173  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:09.151200  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:09.151206  225660 cri.go:89] found id: ""
	I1025 09:14:09.151216  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:09.151274  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.155517  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.159318  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:09.159371  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:09.185902  225660 cri.go:89] found id: ""
	I1025 09:14:09.185929  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.185937  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:09.185942  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:09.185990  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:09.213382  225660 cri.go:89] found id: ""
	I1025 09:14:09.213406  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.213414  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:09.213427  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:09.213437  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:09.227962  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:09.227989  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:09.286897  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:09.286914  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:09.286930  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:09.344244  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:09.344280  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:09.372387  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:09.372412  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:09.404393  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:09.404442  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:09.445740  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:09.445773  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:09.473530  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:09.473557  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:09.530325  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:09.530359  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
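Note: the log-gathering loop above amounts to enumerating containers by name with crictl and tailing each one; done by hand on the node it looks roughly like this (sketch):

    # find the kube-apiserver container and tail its logs
    ID=$(sudo crictl ps -a --name kube-apiserver --quiet | head -n1)
    [ -n "$ID" ] && sudo crictl logs --tail 400 "$ID"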
	I1025 09:14:12.126696  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:12.127001  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:12.127041  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:12.127078  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:12.156258  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:12.156278  225660 cri.go:89] found id: ""
	I1025 09:14:12.156286  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:12.156333  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.160830  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:12.160899  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:12.189251  225660 cri.go:89] found id: ""
	I1025 09:14:12.189276  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.189284  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:12.189291  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:12.189345  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:12.218011  225660 cri.go:89] found id: ""
	I1025 09:14:12.218040  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.218051  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:12.218058  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:12.218110  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:12.246768  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:12.246792  225660 cri.go:89] found id: ""
	I1025 09:14:12.246800  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:12.246849  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.250850  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:12.250911  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:12.279387  225660 cri.go:89] found id: ""
	I1025 09:14:12.279415  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.279430  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:12.279435  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:12.279493  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:12.309764  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:12.309788  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:12.309794  225660 cri.go:89] found id: ""
	I1025 09:14:12.309803  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:12.309858  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.314431  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.318673  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:12.318743  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:12.348251  225660 cri.go:89] found id: ""
	I1025 09:14:12.348282  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.348293  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:12.348301  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:12.348354  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:12.376469  225660 cri.go:89] found id: ""
	I1025 09:14:12.376500  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.376517  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:12.376532  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:12.376543  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:12.481987  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:12.482020  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:12.501685  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:12.501719  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:12.561742  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:12.561777  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:12.595479  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:12.595510  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:12.657485  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:12.657516  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:12.724018  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:12.724046  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:12.724063  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:12.758682  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:12.758719  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:11.510510  259325 provision.go:177] copyRemoteCerts
	I1025 09:14:11.510574  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:14:11.510609  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.528759  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:11.630293  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:14:11.649620  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:14:11.667356  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:14:11.684854  259325 provision.go:87] duration metric: took 936.957621ms to configureAuth
	I1025 09:14:11.684892  259325 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:14:11.685064  259325 config.go:182] Loaded profile config "newest-cni-036155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:11.685161  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.703806  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:11.704008  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:11.704026  259325 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:14:11.968181  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:14:11.968209  259325 machine.go:96] duration metric: took 4.726701907s to provisionDockerMachine
	I1025 09:14:11.968221  259325 client.go:171] duration metric: took 10.423315226s to LocalClient.Create
	I1025 09:14:11.968243  259325 start.go:167] duration metric: took 10.423381733s to libmachine.API.Create "newest-cni-036155"
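Note: the CRIO_MINIKUBE_OPTIONS sysconfig drop-in written a few lines above can be inspected from the host once the profile is up (sketch):

    minikube -p newest-cni-036155 ssh -- cat /etc/sysconfig/crio.minikube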
	I1025 09:14:11.968252  259325 start.go:293] postStartSetup for "newest-cni-036155" (driver="docker")
	I1025 09:14:11.968273  259325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:14:11.968342  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:14:11.968382  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.988313  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.091847  259325 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:14:12.096150  259325 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:14:12.096175  259325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:14:12.096187  259325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:14:12.096246  259325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:14:12.096338  259325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:14:12.096472  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:14:12.104581  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:14:12.125866  259325 start.go:296] duration metric: took 157.598101ms for postStartSetup
	I1025 09:14:12.126207  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:12.145205  259325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json ...
	I1025 09:14:12.145547  259325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:14:12.145602  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.166198  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.265965  259325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:14:12.271045  259325 start.go:128] duration metric: took 10.728656434s to createHost
	I1025 09:14:12.271079  259325 start.go:83] releasing machines lock for "newest-cni-036155", held for 10.728853828s
	I1025 09:14:12.271157  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:12.292688  259325 ssh_runner.go:195] Run: cat /version.json
	I1025 09:14:12.292723  259325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:14:12.292742  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.292793  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.314352  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.314667  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.483055  259325 ssh_runner.go:195] Run: systemctl --version
	I1025 09:14:12.490540  259325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:14:12.536231  259325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:14:12.541807  259325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:14:12.541870  259325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:14:12.571901  259325 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:14:12.571931  259325 start.go:495] detecting cgroup driver to use...
	I1025 09:14:12.571966  259325 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:14:12.572017  259325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:14:12.596449  259325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:14:12.611557  259325 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:14:12.611628  259325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:14:12.630533  259325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:14:12.648087  259325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:14:12.736517  259325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:14:12.839188  259325 docker.go:234] disabling docker service ...
	I1025 09:14:12.839286  259325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:14:12.859123  259325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:14:12.873528  259325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:14:12.959727  259325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:14:13.046275  259325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:14:13.059833  259325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:14:13.074282  259325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:14:13.074351  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.085056  259325 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:14:13.085131  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.094564  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.103436  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.112411  259325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:14:13.120618  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.129243  259325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.143332  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
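Note: after the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should look approximately like the following; this is reconstructed from the commands in the log, not captured from the node, and the TOML section headers are assumed from CRI-O's stock layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]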
	I1025 09:14:13.152512  259325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:14:13.160145  259325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:14:13.167921  259325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:14:13.247586  259325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:14:13.369361  259325 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:14:13.369432  259325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:14:13.373738  259325 start.go:563] Will wait 60s for crictl version
	I1025 09:14:13.373798  259325 ssh_runner.go:195] Run: which crictl
	I1025 09:14:13.377873  259325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:14:13.402547  259325 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:14:13.402629  259325 ssh_runner.go:195] Run: crio --version
	I1025 09:14:13.435875  259325 ssh_runner.go:195] Run: crio --version
	I1025 09:14:13.466340  259325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:14:13.467881  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:14:13.486741  259325 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:14:13.491163  259325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:14:13.503996  259325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:14:13.505132  259325 kubeadm.go:883] updating cluster {Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:14:13.505308  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:13.505385  259325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:14:13.537110  259325 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:14:13.537138  259325 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:14:13.537208  259325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:14:13.565601  259325 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:14:13.565629  259325 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:14:13.565668  259325 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:14:13.565770  259325 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-036155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
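Note: the kubelet flags above are installed as a systemd drop-in (the 368-byte 10-kubeadm.conf scp'd just below); systemd can display the merged unit on the node (sketch):

    systemctl cat kubelet            # base unit plus drop-ins
    systemctl status kubelet --no-pager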
	I1025 09:14:13.565852  259325 ssh_runner.go:195] Run: crio config
	I1025 09:14:13.613362  259325 cni.go:84] Creating CNI manager for ""
	I1025 09:14:13.613386  259325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:14:13.613402  259325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:14:13.613423  259325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-036155 NodeName:newest-cni-036155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:14:13.613560  259325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-036155"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
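Note: the kubeadm.yaml generated above can be checked offline before init; recent kubeadm releases ship a validator subcommand (sketch, assuming it is present in the v1.34.1 binary):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml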
	
	I1025 09:14:13.613625  259325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:14:13.621734  259325 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:14:13.621798  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:14:13.629658  259325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:14:13.642503  259325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:14:13.657918  259325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 09:14:13.670798  259325 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:14:13.674428  259325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:14:13.684203  259325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:14:13.764843  259325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:14:13.785140  259325 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155 for IP: 192.168.103.2
	I1025 09:14:13.785167  259325 certs.go:195] generating shared ca certs ...
	I1025 09:14:13.785187  259325 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:13.785344  259325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:14:13.785395  259325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:14:13.785408  259325 certs.go:257] generating profile certs ...
	I1025 09:14:13.785477  259325 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key
	I1025 09:14:13.785494  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt with IP's: []
	I1025 09:14:14.040562  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt ...
	I1025 09:14:14.040589  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt: {Name:mk646b8f9783dd9e4707890963ea7e898faa4fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.040796  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key ...
	I1025 09:14:14.040814  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key: {Name:mkc53418ebf76ccde9e19bfb0999b44fd01a281b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.040936  259325 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f
	I1025 09:14:14.040955  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:14:14.178872  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f ...
	I1025 09:14:14.178902  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f: {Name:mk6d40b7bebb79f6059b96eb77ffd7cc4e3645e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.179108  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f ...
	I1025 09:14:14.179126  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f: {Name:mkbc6d5a1a1415943f145cdf28bbee21fccbc4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.179228  259325 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt
	I1025 09:14:14.179331  259325 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key
	I1025 09:14:14.179401  259325 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key
	I1025 09:14:14.179419  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt with IP's: []
	I1025 09:14:14.456160  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt ...
	I1025 09:14:14.456187  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt: {Name:mk6afabad4b505221210ee1843d1e445e48419a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.456387  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key ...
	I1025 09:14:14.456405  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key: {Name:mk8b837757d816131e1957def20b89352fbd6a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.456615  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:14:14.456680  259325 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:14:14.456693  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:14:14.456721  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:14:14.456755  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:14:14.456784  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:14:14.456839  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:14:14.457424  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:14:14.475888  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:14:14.494249  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:14:14.512460  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:14:14.530466  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:14:14.550613  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:14:14.569632  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:14:14.588778  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:14:14.607411  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:14:14.627793  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:14:14.645515  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:14:14.662990  259325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:14:14.675778  259325 ssh_runner.go:195] Run: openssl version
	I1025 09:14:14.682117  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:14:14.690896  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.694728  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.694786  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.729366  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:14:14.738443  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:14:14.747390  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.751269  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.751325  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.787080  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:14:14.797279  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:14:14.806185  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.809958  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.810016  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.844458  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
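Note: the hash-and-symlink sequence above is how OpenSSL's CApath lookup works: each CA cert is linked as <subject-hash>.0 so verifiers can locate it by hash; by hand it reduces to (sketch, hash value b5213941 taken from the log):

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/$H.0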
	I1025 09:14:14.853408  259325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:14:14.857106  259325 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:14:14.857170  259325 kubeadm.go:400] StartCluster: {Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:14:14.857267  259325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:14:14.857318  259325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:14:14.885203  259325 cri.go:89] found id: ""
	I1025 09:14:14.885275  259325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:14:14.894314  259325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:14:14.902526  259325 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:14:14.902581  259325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:14:14.910548  259325 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:14:14.910567  259325 kubeadm.go:157] found existing configuration files:
	
	I1025 09:14:14.910606  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:14:14.918559  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:14:14.918617  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:14:14.926037  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:14:14.933744  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:14:14.933812  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:14:14.941147  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:14:14.949023  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:14:14.949074  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:14:14.956352  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:14:14.963871  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:14:14.963917  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
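	The four grep-then-rm pairs above are one repeated cleanup pattern: any kubeconfig that does not mention the expected control-plane endpoint is treated as stale and deleted so kubeadm can regenerate it. A hedged shell sketch of that loop (endpoint and file names taken from the log; minikube itself performs this from Go, not a script):
	
	  ENDPOINT="https://control-plane.minikube.internal:8443"
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # grep exits non-zero when the endpoint (or the file itself) is
	    # missing; in that case the stale config is removed.
	    sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	  done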
	I1025 09:14:14.971281  259325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:14:15.012944  259325 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:14:15.013018  259325 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:14:15.034481  259325 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:14:15.034629  259325 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:14:15.034715  259325 kubeadm.go:318] OS: Linux
	I1025 09:14:15.034799  259325 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:14:15.034865  259325 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:14:15.034941  259325 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:14:15.035026  259325 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:14:15.035104  259325 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:14:15.035174  259325 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:14:15.035234  259325 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:14:15.035306  259325 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:14:15.095588  259325 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:14:15.095759  259325 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:14:15.095880  259325 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:14:15.103017  259325 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:14:15.106084  259325 out.go:252]   - Generating certificates and keys ...
	I1025 09:14:15.106182  259325 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:14:15.106260  259325 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:14:15.271964  259325 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:14:15.313276  259325 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:14:15.508442  259325 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:14:15.535170  259325 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:14:15.844944  259325 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:14:15.845122  259325 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-036155] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:14:16.013299  259325 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:14:16.013491  259325 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-036155] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:14:16.266960  259325 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:14:12.796156  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:12.796183  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:15.330709  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:15.331131  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:15.331185  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:15.331257  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:15.361721  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:15.361747  225660 cri.go:89] found id: ""
	I1025 09:14:15.361757  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:15.361820  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.366052  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:15.366106  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:15.393921  225660 cri.go:89] found id: ""
	I1025 09:14:15.393946  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.393953  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:15.393958  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:15.394003  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:15.421456  225660 cri.go:89] found id: ""
	I1025 09:14:15.421483  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.421494  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:15.421501  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:15.421566  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:15.449595  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:15.449622  225660 cri.go:89] found id: ""
	I1025 09:14:15.449631  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:15.449706  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.453889  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:15.453971  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:15.481414  225660 cri.go:89] found id: ""
	I1025 09:14:15.481440  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.481450  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:15.481458  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:15.481532  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:15.509346  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:15.509385  225660 cri.go:89] found id: ""
	I1025 09:14:15.509395  225660 logs.go:282] 1 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692]
	I1025 09:14:15.509452  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.513693  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:15.513759  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:15.540722  225660 cri.go:89] found id: ""
	I1025 09:14:15.540753  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.540765  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:15.540772  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:15.540828  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:15.569576  225660 cri.go:89] found id: ""
	I1025 09:14:15.569607  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.569618  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:15.569630  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:15.569659  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:15.625756  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:15.625804  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:15.657463  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:15.657491  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:15.745931  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:15.745976  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:15.761570  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:15.761599  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:15.820944  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:15.820966  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:15.820980  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:15.853603  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:15.853634  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:15.905243  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:15.905280  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
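	Every "Gathering logs for ..." step above follows the same two-command pattern: list matching container IDs with crictl, then tail each container's log. A minimal sketch of that pattern, assuming crictl is on PATH (the log shows minikube falling back to `which crictl`):
	
	  for name in kube-apiserver etcd coredns kube-scheduler \
	              kube-proxy kube-controller-manager kindnet storage-provisioner; do
	    # --quiet prints bare container IDs; -a includes exited containers.
	    for id in $(sudo crictl ps -a --quiet --name="$name"); do
	      echo "==> $name [$id] <=="
	      sudo crictl logs --tail 400 "$id"
	    done
	  done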
	I1025 09:14:16.769058  259325 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:14:17.427908  259325 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:14:17.428076  259325 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:14:17.701563  259325 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:14:17.897864  259325 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:14:17.978230  259325 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:14:18.126870  259325 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:14:18.386586  259325 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:14:18.387355  259325 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:14:18.392686  259325 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
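	The bracketed [certs], [kubeconfig], [etcd] and [control-plane] markers above are kubeadm's standard init phases. Each phase can also be run on its own against the same config file; for example (a hypothetical standalone invocation, not something minikube runs here):
	
	  # Regenerate only the kubeconfig files, reusing certs already on disk.
	  sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml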
	
	
	==> CRI-O <==
	Oct 25 09:14:07 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:07.264867262Z" level=info msg="Starting container: 5849ab46d744ff55a5e5ffb77dd02b3ef4cafb0b35ccac9aa7b628243a84c1d9" id=d1bc49b2-9f4c-4c90-9a60-e7b3b79c6d8d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:14:07 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:07.267349351Z" level=info msg="Started container" PID=1852 containerID=5849ab46d744ff55a5e5ffb77dd02b3ef4cafb0b35ccac9aa7b628243a84c1d9 description=kube-system/coredns-66bc5c9577-72zpn/coredns id=d1bc49b2-9f4c-4c90-9a60-e7b3b79c6d8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=abc9887c90abb2cc88e92f41dab9c0d7467217fbc595394cbbfbc8fe168ad628
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.54637703Z" level=info msg="Running pod sandbox: default/busybox/POD" id=15ba3092-8349-4e23-9014-868aeb8dc40e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.546472906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.551136359Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:68aa7cff79f7cb9958dd6a5dd47a4a922c4a873a5bd23ee1e3af789ae9787340 UID:2a8cbb66-d3e8-45f9-aa54-4adc15127a32 NetNS:/var/run/netns/df409e0d-dfc1-4d7f-99d1-70c335654dcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b070}] Aliases:map[]}"
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.551166944Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.56130138Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:68aa7cff79f7cb9958dd6a5dd47a4a922c4a873a5bd23ee1e3af789ae9787340 UID:2a8cbb66-d3e8-45f9-aa54-4adc15127a32 NetNS:/var/run/netns/df409e0d-dfc1-4d7f-99d1-70c335654dcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b070}] Aliases:map[]}"
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.561478983Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.56226516Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.563148656Z" level=info msg="Ran pod sandbox 68aa7cff79f7cb9958dd6a5dd47a4a922c4a873a5bd23ee1e3af789ae9787340 with infra container: default/busybox/POD" id=15ba3092-8349-4e23-9014-868aeb8dc40e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.564470164Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b8f956c-42ec-4cfc-8a5d-a29cc4ceadb3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.564615256Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4b8f956c-42ec-4cfc-8a5d-a29cc4ceadb3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.564676931Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4b8f956c-42ec-4cfc-8a5d-a29cc4ceadb3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.565406069Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=60214392-317a-4090-9bcf-eacc55fe9e26 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:14:10 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:10.568333303Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.279795015Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=60214392-317a-4090-9bcf-eacc55fe9e26 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.280546795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3c4a9a8-f6d0-4e08-a239-7da524c3f210 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.281952069Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f73765a2-66e0-4276-9c95-6598caa47925 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.285483499Z" level=info msg="Creating container: default/busybox/busybox" id=9a149203-96f3-4480-af18-f31e5988c4b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.285621617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.290725219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.291231028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.327047843Z" level=info msg="Created container 980a0eb2dfb2fad8263ac8c1c7642f57f3b2b41c592d2036497a874a62d88eac: default/busybox/busybox" id=9a149203-96f3-4480-af18-f31e5988c4b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.32784501Z" level=info msg="Starting container: 980a0eb2dfb2fad8263ac8c1c7642f57f3b2b41c592d2036497a874a62d88eac" id=8b581e2e-8a71-43e5-acd7-e9fe20b9eac8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:14:11 default-k8s-diff-port-891466 crio[779]: time="2025-10-25T09:14:11.330140718Z" level=info msg="Started container" PID=1929 containerID=980a0eb2dfb2fad8263ac8c1c7642f57f3b2b41c592d2036497a874a62d88eac description=default/busybox/busybox id=8b581e2e-8a71-43e5-acd7-e9fe20b9eac8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=68aa7cff79f7cb9958dd6a5dd47a4a922c4a873a5bd23ee1e3af789ae9787340
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	980a0eb2dfb2f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   68aa7cff79f7c       busybox                                                default
	5849ab46d744f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   abc9887c90abb       coredns-66bc5c9577-72zpn                               kube-system
	bdf7862423ed2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   2cfa90be67302       storage-provisioner                                    kube-system
	7772668aaa0ef       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   dc7784a244615       kindnet-9xc2z                                          kube-system
	6e65784549b47       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   4cfccbc215639       kube-proxy-rmqbr                                       kube-system
	ee228932fb4de       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   a0824acd8f8f1       etcd-default-k8s-diff-port-891466                      kube-system
	17a3e49e0ac0b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   9b4a6d7af21f8       kube-apiserver-default-k8s-diff-port-891466            kube-system
	d3e077c77e70a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   bb02af7848612       kube-controller-manager-default-k8s-diff-port-891466   kube-system
	f034f38ec72b6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   666d70fb926d4       kube-scheduler-default-k8s-diff-port-891466            kube-system
	
	
	==> coredns [5849ab46d744ff55a5e5ffb77dd02b3ef4cafb0b35ccac9aa7b628243a84c1d9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55222 - 34611 "HINFO IN 1907720298695484520.7803294373529156492. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.503706767s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-891466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-891466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=default-k8s-diff-port-891466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_13_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-891466
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:14:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:14:06 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:14:06 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:14:06 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:14:06 +0000   Sat, 25 Oct 2025 09:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-891466
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2fa36a04-64f2-4ad6-99cd-8fd412b795ce
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-72zpn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-891466                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-9xc2z                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-891466             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-891466    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-rmqbr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-891466             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-891466 event: Registered Node default-k8s-diff-port-891466 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-891466 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
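	The repeated "martian source" entries above are the kernel flagging packets whose source address is implausible on the receiving interface (127.0.0.1 arriving on eth0); they appear only when martian logging is enabled. A quick check, assuming a stock sysctl layout:
	
	  # 1 means the kernel logs martian packets, producing entries like those above.
	  sysctl net.ipv4.conf.all.log_martians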
	
	
	==> etcd [ee228932fb4de18b5d221102ab7e64a9fae80366170417040eb6118603497778] <==
	{"level":"warn","ts":"2025-10-25T09:13:46.688383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.695420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.702351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.710033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.716269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.724961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.731633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.739103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.746118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.752848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.760385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.766615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.776697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.784617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.792252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.799001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.820009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.827316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.848691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.856183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.862489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.882182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.888862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.895258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:46.946855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53736","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:14:20 up 56 min,  0 user,  load average: 2.23, 3.01, 2.12
	Linux default-k8s-diff-port-891466 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7772668aaa0efcf407f0d3a2a36f7cc6cd87ccac7171eceb956bc2407f65eb9b] <==
	I1025 09:13:56.034447       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:13:56.034737       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:13:56.034914       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:13:56.034941       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:13:56.034976       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:13:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:13:56.331327       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:13:56.331364       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:13:56.331379       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:13:56.331908       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:13:56.732167       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:13:56.732196       1 metrics.go:72] Registering metrics
	I1025 09:13:56.732268       1 controller.go:711] "Syncing nftables rules"
	I1025 09:14:06.331857       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:14:06.331936       1 main.go:301] handling current node
	I1025 09:14:16.331846       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:14:16.331891       1 main.go:301] handling current node
	
	
	==> kube-apiserver [17a3e49e0ac0b2c4c82f157a0bdab43c6c29beaa4a2acce5c2c1064b09473c70] <==
	I1025 09:13:47.414072       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:13:47.417450       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:13:47.417676       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:13:47.421697       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:13:47.421789       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:13:47.428542       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:13:47.437928       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:13:48.317071       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:13:48.321040       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:13:48.321073       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:13:48.837874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:13:48.880117       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:13:49.022681       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:13:49.028938       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 09:13:49.029987       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:13:49.034610       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:13:49.353106       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:13:50.123052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:13:50.133338       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:13:50.141341       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:13:54.358393       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:13:55.459017       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:13:55.465384       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:13:55.505852       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 09:14:18.341361       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:49030: use of closed network connection
	
	
	==> kube-controller-manager [d3e077c77e70a9229a95ec1a5d7af8873d16db7ff16d32d3551d5f4d39013ebb] <==
	I1025 09:13:54.352488       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:13:54.352580       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:13:54.352685       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-891466"
	I1025 09:13:54.352787       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:13:54.352958       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:13:54.352969       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:13:54.353292       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:13:54.353305       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:13:54.353322       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:13:54.353485       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:13:54.353844       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:13:54.354167       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:13:54.354364       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:13:54.356104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:13:54.356235       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:13:54.356538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:13:54.357487       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:13:54.357539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:13:54.357553       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:13:54.357844       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:13:54.360977       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:13:54.365498       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:13:54.373289       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:13:54.386347       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:14:09.354969       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6e65784549b47c3745e9129454c7e44f5fd19d39da04671a8dc3219fe586367e] <==
	I1025 09:13:55.944335       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:13:56.017009       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:13:56.117188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:13:56.117250       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:13:56.117354       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:13:56.136562       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:13:56.136610       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:13:56.141670       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:13:56.142018       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:13:56.142034       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:13:56.143052       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:13:56.143071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:13:56.143086       1 config.go:200] "Starting service config controller"
	I1025 09:13:56.143091       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:13:56.143124       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:13:56.143146       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:13:56.143189       1 config.go:309] "Starting node config controller"
	I1025 09:13:56.143210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:13:56.143223       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:13:56.243621       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:13:56.243692       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:13:56.243788       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f034f38ec72b6b2d8bfb1f6f0df3315b81a119bf1a13a6bdbb18b6c6f4f204d9] <==
	E1025 09:13:47.359783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:13:47.359794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:13:47.359871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:13:47.360246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:13:47.360305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:13:47.360285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:13:47.360305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:13:47.360398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:13:47.360435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:13:47.360467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:13:47.360536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:13:47.360537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:13:48.164349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:13:48.171723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:13:48.189351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:13:48.192781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:13:48.295447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:13:48.361055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:13:48.364247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:13:48.393509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:13:48.409670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:13:48.456955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:13:48.487365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:13:48.618148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1025 09:13:50.458047       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:13:50 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:50.998752    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-891466" podStartSLOduration=0.998732164 podStartE2EDuration="998.732164ms" podCreationTimestamp="2025-10-25 09:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:50.998618695 +0000 UTC m=+1.128730421" watchObservedRunningTime="2025-10-25 09:13:50.998732164 +0000 UTC m=+1.128843885"
	Oct 25 09:13:51 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:51.009129    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-891466" podStartSLOduration=1.009107672 podStartE2EDuration="1.009107672s" podCreationTimestamp="2025-10-25 09:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:51.009065713 +0000 UTC m=+1.139177438" watchObservedRunningTime="2025-10-25 09:13:51.009107672 +0000 UTC m=+1.139219398"
	Oct 25 09:13:51 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:51.030020    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-891466" podStartSLOduration=1.030003848 podStartE2EDuration="1.030003848s" podCreationTimestamp="2025-10-25 09:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:51.01982299 +0000 UTC m=+1.149934715" watchObservedRunningTime="2025-10-25 09:13:51.030003848 +0000 UTC m=+1.160115554"
	Oct 25 09:13:51 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:51.042633    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-891466" podStartSLOduration=1.04260759 podStartE2EDuration="1.04260759s" podCreationTimestamp="2025-10-25 09:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:51.030237281 +0000 UTC m=+1.160348997" watchObservedRunningTime="2025-10-25 09:13:51.04260759 +0000 UTC m=+1.172719313"
	Oct 25 09:13:54 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:54.383492    1339 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:13:54 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:54.384293    1339 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577248    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v46zs\" (UniqueName: \"kubernetes.io/projected/d20569e7-e7e7-4f55-a796-3b40a97b41cb-kube-api-access-v46zs\") pod \"kube-proxy-rmqbr\" (UID: \"d20569e7-e7e7-4f55-a796-3b40a97b41cb\") " pod="kube-system/kube-proxy-rmqbr"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577376    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/133978f9-4ef3-4e01-ba53-fdf702776a49-xtables-lock\") pod \"kindnet-9xc2z\" (UID: \"133978f9-4ef3-4e01-ba53-fdf702776a49\") " pod="kube-system/kindnet-9xc2z"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577427    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d20569e7-e7e7-4f55-a796-3b40a97b41cb-xtables-lock\") pod \"kube-proxy-rmqbr\" (UID: \"d20569e7-e7e7-4f55-a796-3b40a97b41cb\") " pod="kube-system/kube-proxy-rmqbr"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577454    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d20569e7-e7e7-4f55-a796-3b40a97b41cb-lib-modules\") pod \"kube-proxy-rmqbr\" (UID: \"d20569e7-e7e7-4f55-a796-3b40a97b41cb\") " pod="kube-system/kube-proxy-rmqbr"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577475    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/133978f9-4ef3-4e01-ba53-fdf702776a49-lib-modules\") pod \"kindnet-9xc2z\" (UID: \"133978f9-4ef3-4e01-ba53-fdf702776a49\") " pod="kube-system/kindnet-9xc2z"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577500    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8v6\" (UniqueName: \"kubernetes.io/projected/133978f9-4ef3-4e01-ba53-fdf702776a49-kube-api-access-xj8v6\") pod \"kindnet-9xc2z\" (UID: \"133978f9-4ef3-4e01-ba53-fdf702776a49\") " pod="kube-system/kindnet-9xc2z"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577533    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d20569e7-e7e7-4f55-a796-3b40a97b41cb-kube-proxy\") pod \"kube-proxy-rmqbr\" (UID: \"d20569e7-e7e7-4f55-a796-3b40a97b41cb\") " pod="kube-system/kube-proxy-rmqbr"
	Oct 25 09:13:55 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:55.577558    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/133978f9-4ef3-4e01-ba53-fdf702776a49-cni-cfg\") pod \"kindnet-9xc2z\" (UID: \"133978f9-4ef3-4e01-ba53-fdf702776a49\") " pod="kube-system/kindnet-9xc2z"
	Oct 25 09:13:56 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:56.008247    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9xc2z" podStartSLOduration=1.008224192 podStartE2EDuration="1.008224192s" podCreationTimestamp="2025-10-25 09:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:55.9963984 +0000 UTC m=+6.126510123" watchObservedRunningTime="2025-10-25 09:13:56.008224192 +0000 UTC m=+6.138335917"
	Oct 25 09:13:57 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:13:57.869277    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rmqbr" podStartSLOduration=2.869250865 podStartE2EDuration="2.869250865s" podCreationTimestamp="2025-10-25 09:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:56.008450795 +0000 UTC m=+6.138562521" watchObservedRunningTime="2025-10-25 09:13:57.869250865 +0000 UTC m=+7.999362591"
	Oct 25 09:14:06 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:06.866133    1339 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:14:06 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:06.961138    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522-tmp\") pod \"storage-provisioner\" (UID: \"64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522\") " pod="kube-system/storage-provisioner"
	Oct 25 09:14:06 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:06.961248    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fntns\" (UniqueName: \"kubernetes.io/projected/64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522-kube-api-access-fntns\") pod \"storage-provisioner\" (UID: \"64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522\") " pod="kube-system/storage-provisioner"
	Oct 25 09:14:06 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:06.961293    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f0ca3b1-36e4-4471-862a-9eabfb9074aa-config-volume\") pod \"coredns-66bc5c9577-72zpn\" (UID: \"3f0ca3b1-36e4-4471-862a-9eabfb9074aa\") " pod="kube-system/coredns-66bc5c9577-72zpn"
	Oct 25 09:14:06 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:06.961353    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcl7f\" (UniqueName: \"kubernetes.io/projected/3f0ca3b1-36e4-4471-862a-9eabfb9074aa-kube-api-access-tcl7f\") pod \"coredns-66bc5c9577-72zpn\" (UID: \"3f0ca3b1-36e4-4471-862a-9eabfb9074aa\") " pod="kube-system/coredns-66bc5c9577-72zpn"
	Oct 25 09:14:08 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:08.035781    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.035747186 podStartE2EDuration="13.035747186s" podCreationTimestamp="2025-10-25 09:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:08.024709972 +0000 UTC m=+18.154821700" watchObservedRunningTime="2025-10-25 09:14:08.035747186 +0000 UTC m=+18.165858913"
	Oct 25 09:14:10 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:10.239719    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-72zpn" podStartSLOduration=15.239689894 podStartE2EDuration="15.239689894s" podCreationTimestamp="2025-10-25 09:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:08.036682278 +0000 UTC m=+18.166793999" watchObservedRunningTime="2025-10-25 09:14:10.239689894 +0000 UTC m=+20.369801621"
	Oct 25 09:14:10 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:10.283967    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbls\" (UniqueName: \"kubernetes.io/projected/2a8cbb66-d3e8-45f9-aa54-4adc15127a32-kube-api-access-5fbls\") pod \"busybox\" (UID: \"2a8cbb66-d3e8-45f9-aa54-4adc15127a32\") " pod="default/busybox"
	Oct 25 09:14:12 default-k8s-diff-port-891466 kubelet[1339]: I1025 09:14:12.038504    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.322091425 podStartE2EDuration="2.038482584s" podCreationTimestamp="2025-10-25 09:14:10 +0000 UTC" firstStartedPulling="2025-10-25 09:14:10.564991071 +0000 UTC m=+20.695102777" lastFinishedPulling="2025-10-25 09:14:11.281382217 +0000 UTC m=+21.411493936" observedRunningTime="2025-10-25 09:14:12.038383913 +0000 UTC m=+22.168495640" watchObservedRunningTime="2025-10-25 09:14:12.038482584 +0000 UTC m=+22.168594309"
	
	
	==> storage-provisioner [bdf7862423ed212712a8a45958aa6a30d2367c5987a3ebda2eb0776dcbec6ad5] <==
	I1025 09:14:07.275440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:14:07.287288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:14:07.287350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:14:07.290528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:07.299102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:14:07.299380       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:14:07.299624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-891466_eb1b1898-4522-4467-9c83-3e20bf58901f!
	I1025 09:14:07.299888       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11fba150-462c-4200-a429-22a97d0e0933", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-891466_eb1b1898-4522-4467-9c83-3e20bf58901f became leader
	W1025 09:14:07.307286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:07.319013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:14:07.399983       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-891466_eb1b1898-4522-4467-9c83-3e20bf58901f!
	W1025 09:14:09.322022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:09.326779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:11.330607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:11.335953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:13.339354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:13.344252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:15.348255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:15.356223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:17.360157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:17.364244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:19.367546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:19.373504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-891466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.52s)
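
Aside on the repeated storage-provisioner warnings in the log above: the provisioner still takes its leader-election lock on a v1 Endpoints object, which the API server now reports as deprecated in favor of coordination.k8s.io Leases. Below is a minimal sketch of the Lease-based alternative with client-go; it assumes in-cluster configuration, and the lease name simply mirrors the one in the log. This is illustrative, not the provisioner's actual code.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// coordination.k8s.io/v1 Lease lock instead of the deprecated Endpoints lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller here
			},
			OnStoppedLeading: func() {
				// stop work; another replica has taken over
			},
		},
	})
}

With a Lease lock the API server emits no deprecation warnings, and the election semantics are otherwise the same as the Endpoints-based lock seen above.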

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.52s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (295.965279ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:14:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
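
For context, exit status 11 is minikube's MK_ADDON_ENABLE_PAUSED path: before enabling an addon it checks for paused containers by running `sudo runc list -f json` on the node, and on this CRI-O node the runc state directory /run/runc does not exist, so the check itself errors out before the addon is touched. A minimal sketch of that probe, assuming it executes on the node (runPausedProbe is an illustrative name, not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
)

// runPausedProbe mirrors the failing check above: ask runc for its container
// list as JSON. On a CRI-O node /run/runc is typically absent, so the command
// exits non-zero before any container state can be read.
func runPausedProbe() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list failed: %w (output: %s)", err, out)
	}
	fmt.Printf("runc containers: %s\n", out)
	return nil
}

func main() {
	if err := runPausedProbe(); err != nil {
		// e.g. "open /run/runc: no such file or directory", as in the log above
		fmt.Println(err)
	}
}
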
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-106968 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-106968 describe deploy/metrics-server -n kube-system: exit status 1 (67.764644ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-106968 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
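
What the assertion at start_stop_delete_test.go:219 checks: with the --images/--registries overrides, the enabled addon's deployment should reference the rewritten image "fake.domain/registry.k8s.io/echoserver:1.4"; since the enable itself failed, the deployment was never created. A rough client-go sketch of that verification, assuming a reachable kubeconfig (checkAddonImage and the kubeconfig path are illustrative, not the test's actual code):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkAddonImage fetches the metrics-server deployment and verifies that one
// of its containers uses the expected registry-overridden image.
func checkAddonImage(kubeconfig, want string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	dep, err := client.AppsV1().Deployments("kube-system").
		Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("deployment not found (enable likely failed): %w", err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, want) {
			return nil
		}
	}
	return fmt.Errorf("no container image contains %q", want)
}

func main() {
	if err := checkAddonImage("/path/to/kubeconfig", "fake.domain/registry.k8s.io/echoserver:1.4"); err != nil {
		fmt.Println(err)
	}
}
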
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-106968
helpers_test.go:243: (dbg) docker inspect embed-certs-106968:

-- stdout --
	[
	    {
	        "Id": "e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2",
	        "Created": "2025-10-25T09:13:06.160714175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:13:06.196163741Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/hosts",
	        "LogPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2-json.log",
	        "Name": "/embed-certs-106968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-106968:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-106968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2",
	                "LowerDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-106968",
	                "Source": "/var/lib/docker/volumes/embed-certs-106968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-106968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-106968",
	                "name.minikube.sigs.k8s.io": "embed-certs-106968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a51000060ebafc7607f68e70b517b07aad7d9e1058d1b87f5da4cf6e471f3f28",
	            "SandboxKey": "/var/run/docker/netns/a51000060eba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-106968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:e2:ab:1d:1e:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d58a21465e1439a449774f24fb5c5d02c9ed0fbccfcab14073246dc3e313836",
	                    "EndpointID": "37a10d564416e25ee73f81f8c7dacb3c2af45384044dcad0f6fef0d1f30ea2a8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-106968",
	                        "e1514b582330"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-106968 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-106968 logs -n 25: (1.187353331s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-959110 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ delete  │ -p missing-upgrade-047620                                                                                                                                                                                                                     │ missing-upgrade-047620       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ stop    │ -p no-preload-016092 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:13 UTC │
	│ image   │ old-k8s-version-959110 image list --format=json                                                                                                                                                                                               │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ pause   │ -p old-k8s-version-959110 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p cert-expiration-851718                                                                                                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p disable-driver-mounts-664368                                                                                                                                                                                                               │ disable-driver-mounts-664368 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ image   │ no-preload-016092 image list --format=json                                                                                                                                                                                                    │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ pause   │ -p no-preload-016092 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:14:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:14:01.349429  259325 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:14:01.349695  259325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:01.349703  259325 out.go:374] Setting ErrFile to fd 2...
	I1025 09:14:01.349707  259325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:01.349881  259325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:14:01.350326  259325 out.go:368] Setting JSON to false
	I1025 09:14:01.351488  259325 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3389,"bootTime":1761380252,"procs":372,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:14:01.351566  259325 start.go:141] virtualization: kvm guest
	I1025 09:14:01.353581  259325 out.go:179] * [newest-cni-036155] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:14:01.354862  259325 notify.go:220] Checking for updates...
	I1025 09:14:01.354911  259325 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:14:01.356248  259325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:14:01.357829  259325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:14:01.359191  259325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:14:01.360570  259325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:14:01.362056  259325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:14:01.363964  259325 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364078  259325 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364155  259325 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364286  259325 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:14:01.388723  259325 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:14:01.388851  259325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:14:01.446757  259325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:14:01.436278421 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:14:01.446909  259325 docker.go:318] overlay module found
	I1025 09:14:01.448814  259325 out.go:179] * Using the docker driver based on user configuration
	I1025 09:14:01.449910  259325 start.go:305] selected driver: docker
	I1025 09:14:01.449923  259325 start.go:925] validating driver "docker" against <nil>
	I1025 09:14:01.449933  259325 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:14:01.450511  259325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:14:01.511090  259325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:14:01.500485086 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:14:01.511242  259325 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 09:14:01.511267  259325 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 09:14:01.511481  259325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:14:01.513762  259325 out.go:179] * Using Docker driver with root privileges
	I1025 09:14:01.514937  259325 cni.go:84] Creating CNI manager for ""
	I1025 09:14:01.515024  259325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:14:01.515037  259325 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:14:01.515128  259325 start.go:349] cluster config:
	{Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:14:01.516524  259325 out.go:179] * Starting "newest-cni-036155" primary control-plane node in "newest-cni-036155" cluster
	I1025 09:14:01.517782  259325 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:14:01.518984  259325 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:14:01.520226  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:01.520270  259325 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:14:01.520295  259325 cache.go:58] Caching tarball of preloaded images
	I1025 09:14:01.520378  259325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:14:01.520391  259325 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:14:01.520490  259325 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:14:01.520629  259325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json ...
	I1025 09:14:01.520680  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json: {Name:mkbfe9b74fbf6dcc9fce3c2e514dd100d024d023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:01.542057  259325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:14:01.542076  259325 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:14:01.542091  259325 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:14:01.542116  259325 start.go:360] acquireMachinesLock for newest-cni-036155: {Name:mk5b9af4be10aaa846ed9c8c31160df3caae8c3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:14:01.542211  259325 start.go:364] duration metric: took 81.03µs to acquireMachinesLock for "newest-cni-036155"
	I1025 09:14:01.542235  259325 start.go:93] Provisioning new machine with config: &{Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:14:01.542374  259325 start.go:125] createHost starting for "" (driver="docker")
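
(Editor's note on the preload step traced above: preload.go checks for a per-version tarball on disk before downloading. A minimal standalone sketch of that existence check, assuming the path layout shown in the log; the helper name and the use of MINIKUBE_HOME are illustrative, not minikube's actual code.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected tarball path for a Kubernetes version and
// container runtime, matching the name format seen in the log above.
// Hypothetical helper for illustration only.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.34.1", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p, "- skipping download")
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}
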
	I1025 09:13:58.278667  225660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.058221196s)
	W1025 09:13:58.278709  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1025 09:13:58.278726  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:13:58.278748  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:13:58.315063  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:58.315094  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:58.352625  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:58.352693  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:58.381187  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:58.381214  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:58.436157  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:58.436186  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:58.492499  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:58.492535  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:58.528534  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:58.528568  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:58.632433  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:58.632471  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:01.149149  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:02.578502  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:57860->192.168.85.2:8443: read: connection reset by peer
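
(Editor's note: the healthz probe above fails with "connection reset by peer" while the control plane is restarting, and minikube keeps retrying. A rough sketch of such a probe, not minikube's implementation; certificate verification is skipped here purely because the sketch only tests liveness, which is an assumption about what the probe needs.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Liveness-only probe: skip cert verification. A real client
		// should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			// Matches the "stopped: ... connection reset/refused" lines above.
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("healthz returned", resp.StatusCode)
		resp.Body.Close()
		return
	}
}
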
	I1025 09:14:02.578582  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:02.578671  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:02.612993  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:02.613015  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:14:02.613019  225660 cri.go:89] found id: ""
	I1025 09:14:02.613026  225660 logs.go:282] 2 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:14:02.613087  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.617248  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.621187  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:02.621252  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:02.651262  225660 cri.go:89] found id: ""
	I1025 09:14:02.651292  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.651304  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:02.651315  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:02.651375  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:02.680223  225660 cri.go:89] found id: ""
	I1025 09:14:02.680246  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.680255  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:02.680261  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:02.680304  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:02.708376  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:02.708400  225660 cri.go:89] found id: ""
	I1025 09:14:02.708419  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:02.708470  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.712497  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:02.712567  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:02.743096  225660 cri.go:89] found id: ""
	I1025 09:14:02.743123  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.743135  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:02.743142  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:02.743189  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:02.776405  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:02.776424  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:02.776428  225660 cri.go:89] found id: ""
	I1025 09:14:02.776435  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:02.776494  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.780906  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.784758  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:02.784832  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
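
(Editor's note: each "listing CRI containers" step above shells out to crictl per component and collects non-empty IDs; zero lines of output produce the "No container was found matching" warning. A hedged sketch of that lookup, assuming only the documented `crictl ps -a --quiet --name=NAME` behavior.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the container IDs crictl prints, one per line.
// Illustrative reconstruction, not minikube's cri.go.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kindnet"} {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
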
	W1025 09:13:59.435798  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:01.935111  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:01.271430  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:14:03.770895  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:14:01.544621  259325 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:14:01.544863  259325 start.go:159] libmachine.API.Create for "newest-cni-036155" (driver="docker")
	I1025 09:14:01.544898  259325 client.go:168] LocalClient.Create starting
	I1025 09:14:01.544971  259325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:14:01.545008  259325 main.go:141] libmachine: Decoding PEM data...
	I1025 09:14:01.545033  259325 main.go:141] libmachine: Parsing certificate...
	I1025 09:14:01.545103  259325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:14:01.545131  259325 main.go:141] libmachine: Decoding PEM data...
	I1025 09:14:01.545157  259325 main.go:141] libmachine: Parsing certificate...
	I1025 09:14:01.545523  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:14:01.564874  259325 cli_runner.go:211] docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:14:01.564937  259325 network_create.go:284] running [docker network inspect newest-cni-036155] to gather additional debugging logs...
	I1025 09:14:01.564956  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155
	W1025 09:14:01.582897  259325 cli_runner.go:211] docker network inspect newest-cni-036155 returned with exit code 1
	I1025 09:14:01.582929  259325 network_create.go:287] error running [docker network inspect newest-cni-036155]: docker network inspect newest-cni-036155: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-036155 not found
	I1025 09:14:01.582945  259325 network_create.go:289] output of [docker network inspect newest-cni-036155]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-036155 not found
	
	** /stderr **
	I1025 09:14:01.583104  259325 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:14:01.601343  259325 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:14:01.602058  259325 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:14:01.602766  259325 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:14:01.603404  259325 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:14:01.603905  259325 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9aa42478a513 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:4e:f8:f5:5b:2e} reservation:<nil>}
	I1025 09:14:01.604415  259325 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5d58a21465e1 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4e:78:a8:09:a3:02} reservation:<nil>}
	I1025 09:14:01.605183  259325 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb4940}
	I1025 09:14:01.605204  259325 network_create.go:124] attempt to create docker network newest-cni-036155 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:14:01.605249  259325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-036155 newest-cni-036155
	I1025 09:14:01.664530  259325 network_create.go:108] docker network newest-cni-036155 192.168.103.0/24 created
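
(Editor's note: the subnet walk above starts at 192.168.49.0/24 and advances the third octet by 9 (49, 58, 67, 76, 85, 94, 103) until a range with no existing bridge is found, then creates the docker network on it. A simplified sketch of that scan; the step size and start point are read off this log, and the "taken" test is reduced to a local-interface gateway lookup, which is an assumption.)

package main

import (
	"fmt"
	"net"
)

// gatewayTaken reports whether some local interface already owns the
// would-be gateway address of a candidate /24.
func gatewayTaken(gw string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
			return true
		}
	}
	return false
}

func main() {
	for octet := 49; octet <= 103; octet += 9 {
		gw := fmt.Sprintf("192.168.%d.1", octet)
		if gatewayTaken(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", octet, gw)
		return
	}
}
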
	I1025 09:14:01.664563  259325 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-036155" container
	I1025 09:14:01.664653  259325 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:14:01.684160  259325 cli_runner.go:164] Run: docker volume create newest-cni-036155 --label name.minikube.sigs.k8s.io=newest-cni-036155 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:14:01.703110  259325 oci.go:103] Successfully created a docker volume newest-cni-036155
	I1025 09:14:01.703199  259325 cli_runner.go:164] Run: docker run --rm --name newest-cni-036155-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-036155 --entrypoint /usr/bin/test -v newest-cni-036155:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:14:02.100402  259325 oci.go:107] Successfully prepared a docker volume newest-cni-036155
	I1025 09:14:02.100450  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:02.100473  259325 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:14:02.100556  259325 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-036155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:14:02.813543  225660 cri.go:89] found id: ""
	I1025 09:14:02.813571  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.813581  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:02.813588  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:02.813668  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:02.843013  225660 cri.go:89] found id: ""
	I1025 09:14:02.843039  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.843049  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:02.843065  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:02.843079  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:02.858191  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:14:02.858224  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:14:02.894345  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:02.894398  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:02.924538  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:02.924566  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:02.981267  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:02.981304  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:03.096416  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:03.096461  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:03.168015  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:03.168040  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:03.168054  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:03.205969  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:03.206012  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:03.271485  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:03.271526  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:03.300749  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:03.300783  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:05.840548  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:05.841022  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:05.841081  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:05.841139  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:05.869264  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:05.869286  225660 cri.go:89] found id: ""
	I1025 09:14:05.869293  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:05.869340  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:05.873358  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:05.873414  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:05.901366  225660 cri.go:89] found id: ""
	I1025 09:14:05.901395  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.901406  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:05.901413  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:05.901467  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:05.931032  225660 cri.go:89] found id: ""
	I1025 09:14:05.931059  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.931069  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:05.931076  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:05.931142  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:05.959495  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:05.959515  225660 cri.go:89] found id: ""
	I1025 09:14:05.959523  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:05.959567  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:05.963756  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:05.963826  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:05.991899  225660 cri.go:89] found id: ""
	I1025 09:14:05.991925  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.991943  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:05.991953  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:05.992018  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:06.019791  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:06.019811  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:06.019815  225660 cri.go:89] found id: ""
	I1025 09:14:06.019822  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:06.019886  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:06.024190  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:06.028096  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:06.028161  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:06.055987  225660 cri.go:89] found id: ""
	I1025 09:14:06.056018  225660 logs.go:282] 0 containers: []
	W1025 09:14:06.056029  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:06.056035  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:06.056090  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:06.083950  225660 cri.go:89] found id: ""
	I1025 09:14:06.083976  225660 logs.go:282] 0 containers: []
	W1025 09:14:06.083987  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:06.084004  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:06.084019  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:06.110553  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:06.110582  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:06.164204  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:06.164238  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:06.253207  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:06.253241  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:06.313928  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:06.313953  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:06.313968  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:06.346421  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:06.346466  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:06.361467  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:06.361496  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:06.393406  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:06.393444  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:06.444918  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:06.444948  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	W1025 09:14:03.935273  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:06.435636  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	I1025 09:14:06.935349  253344 node_ready.go:49] node "default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:06.935378  253344 node_ready.go:38] duration metric: took 11.503747191s for node "default-k8s-diff-port-891466" to be "Ready" ...
	I1025 09:14:06.935390  253344 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:14:06.935479  253344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:14:06.948160  253344 api_server.go:72] duration metric: took 11.823550151s to wait for apiserver process to appear ...
	I1025 09:14:06.948193  253344 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:14:06.948215  253344 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:14:06.953340  253344 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:14:06.954553  253344 api_server.go:141] control plane version: v1.34.1
	I1025 09:14:06.954586  253344 api_server.go:131] duration metric: took 6.384823ms to wait for apiserver health ...
	I1025 09:14:06.954598  253344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:14:06.958083  253344 system_pods.go:59] 8 kube-system pods found
	I1025 09:14:06.958116  253344 system_pods.go:61] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:06.958122  253344 system_pods.go:61] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:06.958130  253344 system_pods.go:61] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:06.958135  253344 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:06.958140  253344 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:06.958151  253344 system_pods.go:61] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:06.958156  253344 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:06.958167  253344 system_pods.go:61] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:06.958175  253344 system_pods.go:74] duration metric: took 3.569351ms to wait for pod list to return data ...
	I1025 09:14:06.958188  253344 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:14:06.960663  253344 default_sa.go:45] found service account: "default"
	I1025 09:14:06.960687  253344 default_sa.go:55] duration metric: took 2.491182ms for default service account to be created ...
	I1025 09:14:06.960698  253344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:14:06.963911  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:06.963945  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:06.963955  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:06.963967  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:06.963974  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:06.963981  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:06.963989  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:06.964176  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:06.964191  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:06.964221  253344 retry.go:31] will retry after 290.946821ms: missing components: kube-dns
	I1025 09:14:07.261256  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.261299  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.261308  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.261319  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.261325  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.261331  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.261372  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.261383  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.261392  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.261412  253344 retry.go:31] will retry after 251.1932ms: missing components: kube-dns
	I1025 09:14:07.516457  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.516488  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.516494  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.516500  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.516504  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.516508  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.516512  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.516517  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.516524  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.516541  253344 retry.go:31] will retry after 312.108611ms: missing components: kube-dns
	I1025 09:14:07.832521  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.832555  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.832561  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.832567  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.832573  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.832577  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.832580  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.832584  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.832591  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.832610  253344 retry.go:31] will retry after 578.903074ms: missing components: kube-dns
	I1025 09:14:08.416051  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.416084  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Running
	I1025 09:14:08.416092  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:08.416099  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:08.416104  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:08.416109  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:08.416113  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:08.416116  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:08.416121  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Running
	I1025 09:14:08.416131  253344 system_pods.go:126] duration metric: took 1.455426427s to wait for k8s-apps to be running ...
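
(Editor's note: the "will retry after ..." lines above show a poll loop with short randomized delays until the missing component, kube-dns here, is Running. A generic sketch of that pattern under the assumption that the delay is randomized; the exact backoff policy of retry.go is not reproduced.)

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls a readiness predicate with a short randomized sleep between
// attempts until it passes or the deadline expires. Illustration only.
func waitFor(ready func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready() {
			return nil
		}
		d := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: missing components: kube-dns\n", d)
		time.Sleep(d)
	}
	return fmt.Errorf("timed out after %v", timeout)
}

func main() {
	attempts := 0
	err := waitFor(func() bool { attempts++; return attempts > 3 }, 30*time.Second)
	fmt.Println("done:", err)
}
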
	I1025 09:14:08.416145  253344 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:14:08.416197  253344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:14:08.429617  253344 system_svc.go:56] duration metric: took 13.46202ms WaitForService to wait for kubelet
	I1025 09:14:08.429689  253344 kubeadm.go:586] duration metric: took 13.305083699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:14:08.429711  253344 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:14:08.432623  253344 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:14:08.432665  253344 node_conditions.go:123] node cpu capacity is 8
	I1025 09:14:08.432680  253344 node_conditions.go:105] duration metric: took 2.964083ms to run NodePressure ...
	I1025 09:14:08.432693  253344 start.go:241] waiting for startup goroutines ...
	I1025 09:14:08.432702  253344 start.go:246] waiting for cluster config update ...
	I1025 09:14:08.432717  253344 start.go:255] writing updated cluster config ...
	I1025 09:14:08.432974  253344 ssh_runner.go:195] Run: rm -f paused
	I1025 09:14:08.436927  253344 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:08.440402  253344 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.444662  253344 pod_ready.go:94] pod "coredns-66bc5c9577-72zpn" is "Ready"
	I1025 09:14:08.444683  253344 pod_ready.go:86] duration metric: took 4.260186ms for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.446669  253344 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.450415  253344 pod_ready.go:94] pod "etcd-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.450440  253344 pod_ready.go:86] duration metric: took 3.750274ms for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.452271  253344 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.455682  253344 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.455704  253344 pod_ready.go:86] duration metric: took 3.413528ms for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.457512  253344 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:14:05.771472  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:14:08.271104  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:14:08.770948  247074 node_ready.go:49] node "embed-certs-106968" is "Ready"
	I1025 09:14:08.770978  247074 node_ready.go:38] duration metric: took 41.503136723s for node "embed-certs-106968" to be "Ready" ...
	I1025 09:14:08.770991  247074 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:14:08.771040  247074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:14:08.786566  247074 api_server.go:72] duration metric: took 41.819658043s to wait for apiserver process to appear ...
	I1025 09:14:08.786597  247074 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:14:08.786620  247074 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 09:14:08.791819  247074 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 09:14:08.792653  247074 api_server.go:141] control plane version: v1.34.1
	I1025 09:14:08.792675  247074 api_server.go:131] duration metric: took 6.071281ms to wait for apiserver health ...
	I1025 09:14:08.792683  247074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:14:08.796024  247074 system_pods.go:59] 8 kube-system pods found
	I1025 09:14:08.796066  247074 system_pods.go:61] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.796076  247074 system_pods.go:61] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.796088  247074 system_pods.go:61] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.796094  247074 system_pods.go:61] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.796103  247074 system_pods.go:61] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.796108  247074 system_pods.go:61] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.796114  247074 system_pods.go:61] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.796119  247074 system_pods.go:61] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.796133  247074 system_pods.go:74] duration metric: took 3.442989ms to wait for pod list to return data ...
	I1025 09:14:08.796148  247074 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:14:08.798369  247074 default_sa.go:45] found service account: "default"
	I1025 09:14:08.798387  247074 default_sa.go:55] duration metric: took 2.229844ms for default service account to be created ...
	I1025 09:14:08.798394  247074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:14:08.801058  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.801082  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.801088  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.801093  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.801096  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.801100  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.801104  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.801107  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.801112  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.801132  247074 retry.go:31] will retry after 190.781972ms: missing components: kube-dns
	I1025 09:14:08.995887  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.995925  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.995933  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.995941  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.995947  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.995954  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.995959  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.995966  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.995974  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.995996  247074 retry.go:31] will retry after 247.582365ms: missing components: kube-dns
	I1025 09:14:09.247882  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:09.247915  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:09.247921  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:09.247927  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:09.247931  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:09.247935  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:09.247940  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:09.247944  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:09.247949  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:09.247963  247074 retry.go:31] will retry after 418.536389ms: missing components: kube-dns
	I1025 09:14:09.670936  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:09.670969  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Running
	I1025 09:14:09.670977  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:09.670983  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:09.670988  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:09.670993  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:09.670998  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:09.671006  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:09.671011  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Running
	I1025 09:14:09.671021  247074 system_pods.go:126] duration metric: took 872.62006ms to wait for k8s-apps to be running ...
	I1025 09:14:09.671033  247074 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:14:09.671082  247074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:14:09.684149  247074 system_svc.go:56] duration metric: took 13.109824ms WaitForService to wait for kubelet
	I1025 09:14:09.684176  247074 kubeadm.go:586] duration metric: took 42.717274637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:14:09.684197  247074 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:14:09.687014  247074 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:14:09.687037  247074 node_conditions.go:123] node cpu capacity is 8
	I1025 09:14:09.687050  247074 node_conditions.go:105] duration metric: took 2.847789ms to run NodePressure ...
	I1025 09:14:09.687060  247074 start.go:241] waiting for startup goroutines ...
	I1025 09:14:09.687067  247074 start.go:246] waiting for cluster config update ...
	I1025 09:14:09.687077  247074 start.go:255] writing updated cluster config ...
	I1025 09:14:09.687328  247074 ssh_runner.go:195] Run: rm -f paused
	I1025 09:14:09.691103  247074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:09.694610  247074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.698843  247074 pod_ready.go:94] pod "coredns-66bc5c9577-dx4j4" is "Ready"
	I1025 09:14:09.698866  247074 pod_ready.go:86] duration metric: took 4.23265ms for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.700733  247074 pod_ready.go:83] waiting for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.704283  247074 pod_ready.go:94] pod "etcd-embed-certs-106968" is "Ready"
	I1025 09:14:09.704303  247074 pod_ready.go:86] duration metric: took 3.551149ms for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.706066  247074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.709547  247074 pod_ready.go:94] pod "kube-apiserver-embed-certs-106968" is "Ready"
	I1025 09:14:09.709564  247074 pod_ready.go:86] duration metric: took 3.482629ms for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.711117  247074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.840767  253344 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.840794  253344 pod_ready.go:86] duration metric: took 383.263633ms for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.041420  253344 pod_ready.go:83] waiting for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.441977  253344 pod_ready.go:94] pod "kube-proxy-rmqbr" is "Ready"
	I1025 09:14:09.442007  253344 pod_ready.go:86] duration metric: took 400.561652ms for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.641678  253344 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.041042  253344 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:10.041068  253344 pod_ready.go:86] duration metric: took 399.361298ms for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.041080  253344 pod_ready.go:40] duration metric: took 1.604125716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:10.083846  253344 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:14:10.085911  253344 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-891466" cluster and "default" namespace by default
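
(Editor's note: the "extra waiting" phase above walks a fixed list of component labels and blocks until every matching kube-system pod reports Ready. An equivalent external check, shelling out to kubectl with a documented jsonpath template; the helper is hypothetical and not how pod_ready.go queries the API.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// notReady lists kube-system pods matching a label selector whose Ready
// condition is not yet "True". Hypothetical stand-in for the wait above.
func notReady(selector string) ([]string, error) {
	out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
		"-l", selector,
		"-o", `jsonpath={range .items[*]}{.metadata.name}={.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
	if err != nil {
		return nil, err
	}
	var pending []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" && !strings.HasSuffix(line, "=True") {
			pending = append(pending, line)
		}
	}
	return pending, nil
}

func main() {
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		p, err := notReady(sel)
		fmt.Printf("%s: not ready %v err %v\n", sel, p, err)
	}
}
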
	I1025 09:14:10.095667  247074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-106968" is "Ready"
	I1025 09:14:10.095699  247074 pod_ready.go:86] duration metric: took 384.564763ms for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.296396  247074 pod_ready.go:83] waiting for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.695915  247074 pod_ready.go:94] pod "kube-proxy-sm8hw" is "Ready"
	I1025 09:14:10.695940  247074 pod_ready.go:86] duration metric: took 399.512784ms for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.895258  247074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:11.295963  247074 pod_ready.go:94] pod "kube-scheduler-embed-certs-106968" is "Ready"
	I1025 09:14:11.295996  247074 pod_ready.go:86] duration metric: took 400.705834ms for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:11.296011  247074 pod_ready.go:40] duration metric: took 1.604868452s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:11.348313  247074 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:14:06.610431  259325 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-036155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.50981258s)
	I1025 09:14:06.610467  259325 kic.go:203] duration metric: took 4.509989969s to extract preloaded images to volume ...
	W1025 09:14:06.610587  259325 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:14:06.610634  259325 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:14:06.610712  259325 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:14:06.666144  259325 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-036155 --name newest-cni-036155 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-036155 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-036155 --network newest-cni-036155 --ip 192.168.103.2 --volume newest-cni-036155:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:14:06.972900  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Running}}
	I1025 09:14:06.993336  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.013258  259325 cli_runner.go:164] Run: docker exec newest-cni-036155 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:14:07.057407  259325 oci.go:144] the created container "newest-cni-036155" has a running status.
	I1025 09:14:07.057438  259325 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa...
	I1025 09:14:07.113913  259325 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:14:07.147153  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.167068  259325 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:14:07.167088  259325 kic_runner.go:114] Args: [docker exec --privileged newest-cni-036155 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:14:07.214916  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.241483  259325 machine.go:93] provisionDockerMachine start ...
	I1025 09:14:07.241575  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:07.268234  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:07.268673  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:07.268698  259325 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:14:07.269464  259325 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37304->127.0.0.1:33085: read: connection reset by peer
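The dial error above is expected on first contact: sshd inside the freshly created container is not accepting connections yet, so provisioning keeps retrying until a command goes through (as it does a few seconds later below). A rough sketch of such a dial-with-retry loop using golang.org/x/crypto/ssh; the retry count and sleep are illustrative, not minikube's actual values:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node; not for production
			Timeout:         5 * time.Second,
		}
		// Retry until sshd inside the container starts accepting connections.
		var client *ssh.Client
		for i := 0; i < 30; i++ {
			client, err = ssh.Dial("tcp", "127.0.0.1:33085", cfg)
			if err == nil {
				break
			}
			time.Sleep(time.Second)
		}
		if client == nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("hostname: %s err: %v\n", out, err)
	}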
	I1025 09:14:10.411580  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-036155
	
	I1025 09:14:10.411618  259325 ubuntu.go:182] provisioning hostname "newest-cni-036155"
	I1025 09:14:10.411703  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:10.430482  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:10.430731  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:10.430747  259325 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-036155 && echo "newest-cni-036155" | sudo tee /etc/hostname
	I1025 09:14:10.585307  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-036155
	
	I1025 09:14:10.585419  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:10.606084  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:10.606313  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:10.606331  259325 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-036155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-036155/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-036155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:14:10.747795  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:14:10.747824  259325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:14:10.747864  259325 ubuntu.go:190] setting up certificates
	I1025 09:14:10.747881  259325 provision.go:84] configureAuth start
	I1025 09:14:10.747955  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:10.766485  259325 provision.go:143] copyHostCerts
	I1025 09:14:10.766572  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:14:10.766587  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:14:10.766695  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:14:10.766836  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:14:10.766852  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:14:10.766897  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:14:10.766999  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:14:10.767008  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:14:10.767046  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:14:10.767144  259325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.newest-cni-036155 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-036155]
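provision.go then signs a server certificate against the minikube CA with exactly the SAN set shown above. A self-contained sketch of that step with crypto/x509, assuming the CA cert and key are PEM files with a PKCS#1 RSA key; minikube's own implementation and parameters may differ:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			panic(err)
		}
		keyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			panic(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(keyPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		// Assumes an "RSA PRIVATE KEY" (PKCS#1) block, as minikube's certs typically are.
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			panic(err)
		}

		priv, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-036155"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list taken from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-036155"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}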
	I1025 09:14:11.350247  247074 out.go:179] * Done! kubectl is now configured to use "embed-certs-106968" cluster and "default" namespace by default
	I1025 09:14:08.972298  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:08.972739  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
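Throughout this stretch the 225660 run keeps probing the apiserver's /healthz endpoint and gathering component logs while the endpoint still refuses connections. A bare-bones version of such a probe; certificate verification is skipped here purely for the sketch (a real check should trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connect: connection refused" as in the log
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" once the apiserver is up
	}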
	I1025 09:14:08.972796  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:08.972855  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:09.003134  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:09.003160  225660 cri.go:89] found id: ""
	I1025 09:14:09.003170  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:09.003229  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.007677  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:09.007750  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:09.038302  225660 cri.go:89] found id: ""
	I1025 09:14:09.038326  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.038335  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:09.038341  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:09.038431  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:09.066635  225660 cri.go:89] found id: ""
	I1025 09:14:09.066680  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.066692  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:09.066698  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:09.066754  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:09.093560  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:09.093582  225660 cri.go:89] found id: ""
	I1025 09:14:09.093591  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:09.093678  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.097667  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:09.097735  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:09.124755  225660 cri.go:89] found id: ""
	I1025 09:14:09.124779  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.124787  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:09.124792  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:09.124838  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:09.151173  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:09.151200  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:09.151206  225660 cri.go:89] found id: ""
	I1025 09:14:09.151216  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:09.151274  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.155517  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.159318  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:09.159371  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:09.185902  225660 cri.go:89] found id: ""
	I1025 09:14:09.185929  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.185937  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:09.185942  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:09.185990  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:09.213382  225660 cri.go:89] found id: ""
	I1025 09:14:09.213406  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.213414  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:09.213427  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:09.213437  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:09.227962  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:09.227989  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:09.286897  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:09.286914  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:09.286930  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:09.344244  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:09.344280  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:09.372387  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:09.372412  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:09.404393  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:09.404442  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:09.445740  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:09.445773  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:09.473530  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:09.473557  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:09.530325  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:09.530359  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:12.126696  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:12.127001  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:12.127041  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:12.127078  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:12.156258  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:12.156278  225660 cri.go:89] found id: ""
	I1025 09:14:12.156286  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:12.156333  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.160830  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:12.160899  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:12.189251  225660 cri.go:89] found id: ""
	I1025 09:14:12.189276  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.189284  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:12.189291  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:12.189345  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:12.218011  225660 cri.go:89] found id: ""
	I1025 09:14:12.218040  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.218051  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:12.218058  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:12.218110  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:12.246768  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:12.246792  225660 cri.go:89] found id: ""
	I1025 09:14:12.246800  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:12.246849  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.250850  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:12.250911  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:12.279387  225660 cri.go:89] found id: ""
	I1025 09:14:12.279415  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.279430  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:12.279435  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:12.279493  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:12.309764  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:12.309788  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:12.309794  225660 cri.go:89] found id: ""
	I1025 09:14:12.309803  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:12.309858  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.314431  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.318673  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:12.318743  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:12.348251  225660 cri.go:89] found id: ""
	I1025 09:14:12.348282  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.348293  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:12.348301  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:12.348354  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:12.376469  225660 cri.go:89] found id: ""
	I1025 09:14:12.376500  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.376517  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:12.376532  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:12.376543  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:12.481987  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:12.482020  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:12.501685  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:12.501719  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:12.561742  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:12.561777  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:12.595479  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:12.595510  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:12.657485  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:12.657516  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:12.724018  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:12.724046  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:12.724063  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:12.758682  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:12.758719  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:11.510510  259325 provision.go:177] copyRemoteCerts
	I1025 09:14:11.510574  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:14:11.510609  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.528759  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:11.630293  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:14:11.649620  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:14:11.667356  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:14:11.684854  259325 provision.go:87] duration metric: took 936.957621ms to configureAuth
	I1025 09:14:11.684892  259325 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:14:11.685064  259325 config.go:182] Loaded profile config "newest-cni-036155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:11.685161  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.703806  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:11.704008  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:11.704026  259325 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:14:11.968181  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:14:11.968209  259325 machine.go:96] duration metric: took 4.726701907s to provisionDockerMachine
	I1025 09:14:11.968221  259325 client.go:171] duration metric: took 10.423315226s to LocalClient.Create
	I1025 09:14:11.968243  259325 start.go:167] duration metric: took 10.423381733s to libmachine.API.Create "newest-cni-036155"
	I1025 09:14:11.968252  259325 start.go:293] postStartSetup for "newest-cni-036155" (driver="docker")
	I1025 09:14:11.968273  259325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:14:11.968342  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:14:11.968382  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.988313  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.091847  259325 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:14:12.096150  259325 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:14:12.096175  259325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:14:12.096187  259325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:14:12.096246  259325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:14:12.096338  259325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:14:12.096472  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:14:12.104581  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:14:12.125866  259325 start.go:296] duration metric: took 157.598101ms for postStartSetup
	I1025 09:14:12.126207  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:12.145205  259325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json ...
	I1025 09:14:12.145547  259325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:14:12.145602  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.166198  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.265965  259325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:14:12.271045  259325 start.go:128] duration metric: took 10.728656434s to createHost
	I1025 09:14:12.271079  259325 start.go:83] releasing machines lock for "newest-cni-036155", held for 10.728853828s
	I1025 09:14:12.271157  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:12.292688  259325 ssh_runner.go:195] Run: cat /version.json
	I1025 09:14:12.292723  259325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:14:12.292742  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.292793  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.314352  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.314667  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.483055  259325 ssh_runner.go:195] Run: systemctl --version
	I1025 09:14:12.490540  259325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:14:12.536231  259325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:14:12.541807  259325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:14:12.541870  259325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:14:12.571901  259325 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:14:12.571931  259325 start.go:495] detecting cgroup driver to use...
	I1025 09:14:12.571966  259325 detect.go:190] detected "systemd" cgroup driver on host os
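The detection step above settles on the "systemd" cgroup driver for the host. One common heuristic for that decision (an assumption here, not necessarily minikube's exact logic) is to look at PID 1 and whether the unified cgroup v2 hierarchy is mounted:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		comm, err := os.ReadFile("/proc/1/comm")
		pid1Systemd := err == nil && strings.TrimSpace(string(comm)) == "systemd"

		// cgroup v2 exposes cgroup.controllers at the mount root.
		_, err = os.Stat("/sys/fs/cgroup/cgroup.controllers")
		cgroupV2 := err == nil

		driver := "cgroupfs"
		if pid1Systemd {
			driver = "systemd"
		}
		fmt.Printf("pid1=systemd:%v cgroupv2:%v -> driver %q\n", pid1Systemd, cgroupV2, driver)
	}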
	I1025 09:14:12.572017  259325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:14:12.596449  259325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:14:12.611557  259325 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:14:12.611628  259325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:14:12.630533  259325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:14:12.648087  259325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:14:12.736517  259325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:14:12.839188  259325 docker.go:234] disabling docker service ...
	I1025 09:14:12.839286  259325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:14:12.859123  259325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:14:12.873528  259325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:14:12.959727  259325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:14:13.046275  259325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:14:13.059833  259325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:14:13.074282  259325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:14:13.074351  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.085056  259325 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:14:13.085131  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.094564  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.103436  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.112411  259325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:14:13.120618  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.129243  259325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.143332  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.152512  259325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:14:13.160145  259325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:14:13.167921  259325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:14:13.247586  259325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:14:13.369361  259325 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:14:13.369432  259325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:14:13.373738  259325 start.go:563] Will wait 60s for crictl version
	I1025 09:14:13.373798  259325 ssh_runner.go:195] Run: which crictl
	I1025 09:14:13.377873  259325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:14:13.402547  259325 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
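The "Will wait 60s for socket path" step above is a simple existence poll on /var/run/crio/crio.sock after the runtime restart. A sketch; waitForSocket and its 500ms poll interval are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket is up")
	}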
	I1025 09:14:13.402629  259325 ssh_runner.go:195] Run: crio --version
	I1025 09:14:13.435875  259325 ssh_runner.go:195] Run: crio --version
	I1025 09:14:13.466340  259325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:14:13.467881  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:14:13.486741  259325 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:14:13.491163  259325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:14:13.503996  259325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:14:13.505132  259325 kubeadm.go:883] updating cluster {Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:14:13.505308  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:13.505385  259325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:14:13.537110  259325 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:14:13.537138  259325 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:14:13.537208  259325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:14:13.565601  259325 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:14:13.565629  259325 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:14:13.565668  259325 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:14:13.565770  259325 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-036155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:14:13.565852  259325 ssh_runner.go:195] Run: crio config
	I1025 09:14:13.613362  259325 cni.go:84] Creating CNI manager for ""
	I1025 09:14:13.613386  259325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:14:13.613402  259325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:14:13.613423  259325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-036155 NodeName:newest-cni-036155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:14:13.613560  259325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-036155"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:14:13.613625  259325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:14:13.621734  259325 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:14:13.621798  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:14:13.629658  259325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:14:13.642503  259325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:14:13.657918  259325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
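The "scp memory --> ..." lines above copy in-memory assets (the kubelet unit, its drop-in, and kubeadm.yaml) to the node over the existing SSH connection rather than from files on disk. One way to sketch that is to pipe the bytes through sudo tee; writeRemote is a hypothetical helper (minikube's ssh_runner does the equivalent), and the client would come from an ssh.Dial as in the earlier sketch:

	package sshutil

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	// writeRemote copies data to path on the remote host by piping it through
	// "sudo tee"; path must be shell-safe in this simplified form.
	func writeRemote(client *ssh.Client, path string, data []byte) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee writes stdin to path; its echo to stdout is discarded.
		return sess.Run("sudo tee " + path + " >/dev/null")
	}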
	I1025 09:14:13.670798  259325 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:14:13.674428  259325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:14:13.684203  259325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:14:13.764843  259325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:14:13.785140  259325 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155 for IP: 192.168.103.2
	I1025 09:14:13.785167  259325 certs.go:195] generating shared ca certs ...
	I1025 09:14:13.785187  259325 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:13.785344  259325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:14:13.785395  259325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:14:13.785408  259325 certs.go:257] generating profile certs ...
	I1025 09:14:13.785477  259325 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key
	I1025 09:14:13.785494  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt with IP's: []
	I1025 09:14:14.040562  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt ...
	I1025 09:14:14.040589  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt: {Name:mk646b8f9783dd9e4707890963ea7e898faa4fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.040796  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key ...
	I1025 09:14:14.040814  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key: {Name:mkc53418ebf76ccde9e19bfb0999b44fd01a281b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.040936  259325 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f
	I1025 09:14:14.040955  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:14:14.178872  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f ...
	I1025 09:14:14.178902  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f: {Name:mk6d40b7bebb79f6059b96eb77ffd7cc4e3645e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.179108  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f ...
	I1025 09:14:14.179126  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f: {Name:mkbc6d5a1a1415943f145cdf28bbee21fccbc4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.179228  259325 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt
	I1025 09:14:14.179331  259325 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key
	I1025 09:14:14.179401  259325 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key
	I1025 09:14:14.179419  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt with IP's: []
	I1025 09:14:14.456160  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt ...
	I1025 09:14:14.456187  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt: {Name:mk6afabad4b505221210ee1843d1e445e48419a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.456387  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key ...
	I1025 09:14:14.456405  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key: {Name:mk8b837757d816131e1957def20b89352fbd6a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.456615  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:14:14.456680  259325 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:14:14.456693  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:14:14.456721  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:14:14.456755  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:14:14.456784  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:14:14.456839  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:14:14.457424  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:14:14.475888  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:14:14.494249  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:14:14.512460  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:14:14.530466  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:14:14.550613  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:14:14.569632  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:14:14.588778  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:14:14.607411  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:14:14.627793  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:14:14.645515  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:14:14.662990  259325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:14:14.675778  259325 ssh_runner.go:195] Run: openssl version
	I1025 09:14:14.682117  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:14:14.690896  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.694728  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.694786  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.729366  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:14:14.738443  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:14:14.747390  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.751269  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.751325  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.787080  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:14:14.797279  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:14:14.806185  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.809958  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.810016  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.844458  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
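The certificate installation above relies on OpenSSL's subject-hash naming: each trusted CA under /etc/ssl/certs is reachable through a <hash>.0 symlink (b5213941.0 for minikubeCA.pem here). A small sketch that reproduces the hash-and-link step by shelling out to the same openssl invocation the log runs (creating the symlink requires root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash (e.g. b5213941) that OpenSSL uses to
		// look up CAs as /etc/ssl/certs/<hash>.0.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}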
	I1025 09:14:14.853408  259325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:14:14.857106  259325 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:14:14.857170  259325 kubeadm.go:400] StartCluster: {Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:14:14.857267  259325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:14:14.857318  259325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:14:14.885203  259325 cri.go:89] found id: ""
	I1025 09:14:14.885275  259325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:14:14.894314  259325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:14:14.902526  259325 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:14:14.902581  259325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:14:14.910548  259325 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:14:14.910567  259325 kubeadm.go:157] found existing configuration files:
	
	I1025 09:14:14.910606  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:14:14.918559  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:14:14.918617  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:14:14.926037  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:14:14.933744  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:14:14.933812  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:14:14.941147  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:14:14.949023  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:14:14.949074  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:14:14.956352  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:14:14.963871  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:14:14.963917  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
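
The four grep-then-rm pairs above are one pattern: an existing kubeconfig is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it. A condensed sketch of that loop, with the endpoint and file names taken from the log (paths assumed unchanged):

    #!/usr/bin/env bash
    # Remove kubeconfigs that do not reference the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      path="/etc/kubernetes/$conf"
      # grep exits non-zero if the endpoint (or the file) is missing,
      # which is the "will remove" case logged above.
      if ! sudo grep -q "$endpoint" "$path" 2>/dev/null; then
        sudo rm -f "$path"
      fi
    done
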
	I1025 09:14:14.971281  259325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:14:15.012944  259325 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:14:15.013018  259325 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:14:15.034481  259325 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:14:15.034629  259325 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:14:15.034715  259325 kubeadm.go:318] OS: Linux
	I1025 09:14:15.034799  259325 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:14:15.034865  259325 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:14:15.034941  259325 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:14:15.035026  259325 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:14:15.035104  259325 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:14:15.035174  259325 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:14:15.035234  259325 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:14:15.035306  259325 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:14:15.095588  259325 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:14:15.095759  259325 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:14:15.095880  259325 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
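
The preflight hint is actionable: the control-plane image pulls can be done before 'kubeadm init' so the init step does not block on the network. A hedged sketch, reusing the kubeadm config path shown in the log:

    #!/usr/bin/env bash
    # Pre-pull control-plane images, as the preflight message suggests.
    sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
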
	I1025 09:14:15.103017  259325 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:14:15.106084  259325 out.go:252]   - Generating certificates and keys ...
	I1025 09:14:15.106182  259325 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:14:15.106260  259325 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:14:15.271964  259325 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:14:15.313276  259325 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:14:15.508442  259325 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:14:15.535170  259325 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:14:15.844944  259325 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:14:15.845122  259325 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-036155] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:14:16.013299  259325 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:14:16.013491  259325 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-036155] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:14:16.266960  259325 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:14:12.796156  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:12.796183  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:15.330709  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:15.331131  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:15.331185  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:15.331257  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:15.361721  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:15.361747  225660 cri.go:89] found id: ""
	I1025 09:14:15.361757  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:15.361820  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.366052  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:15.366106  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:15.393921  225660 cri.go:89] found id: ""
	I1025 09:14:15.393946  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.393953  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:15.393958  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:15.394003  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:15.421456  225660 cri.go:89] found id: ""
	I1025 09:14:15.421483  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.421494  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:15.421501  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:15.421566  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:15.449595  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:15.449622  225660 cri.go:89] found id: ""
	I1025 09:14:15.449631  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:15.449706  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.453889  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:15.453971  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:15.481414  225660 cri.go:89] found id: ""
	I1025 09:14:15.481440  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.481450  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:15.481458  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:15.481532  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:15.509346  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:15.509385  225660 cri.go:89] found id: ""
	I1025 09:14:15.509395  225660 logs.go:282] 1 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692]
	I1025 09:14:15.509452  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.513693  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:15.513759  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:15.540722  225660 cri.go:89] found id: ""
	I1025 09:14:15.540753  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.540765  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:15.540772  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:15.540828  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:15.569576  225660 cri.go:89] found id: ""
	I1025 09:14:15.569607  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.569618  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:15.569630  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:15.569659  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:15.625756  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:15.625804  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:15.657463  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:15.657491  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:15.745931  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:15.745976  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:15.761570  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:15.761599  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:15.820944  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:15.820966  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:15.820980  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:15.853603  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:15.853634  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:15.905243  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:15.905280  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
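
The interleaved 225660 entries above (a second test profile logging in parallel) show minikube's log-gathering loop: resolve crictl, list containers by component name, then tail each ID that was found. A standalone sketch of the same loop, with the component names and the 400-line tail taken from the log:

    #!/usr/bin/env bash
    # Tail recent logs per control-plane component via CRI.
    CRICTL=$(which crictl || echo crictl)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo "$CRICTL" ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\"" >&2
        continue
      fi
      for id in $ids; do
        echo "==> $name [$id] <=="
        sudo "$CRICTL" logs --tail 400 "$id"
      done
    done
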
	I1025 09:14:16.769058  259325 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:14:17.427908  259325 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:14:17.428076  259325 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:14:17.701563  259325 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:14:17.897864  259325 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:14:17.978230  259325 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:14:18.126870  259325 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:14:18.386586  259325 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:14:18.387355  259325 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:14:18.392686  259325 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Oct 25 09:14:08 embed-certs-106968 crio[771]: time="2025-10-25T09:14:08.734300954Z" level=info msg="Starting container: 5911f9c5cdf18133681f5cf989145599cbcaae783f1f319f06e52ff11166b1ea" id=fd103706-f9ae-40b7-bf80-100ae1c1d58b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:14:08 embed-certs-106968 crio[771]: time="2025-10-25T09:14:08.73635214Z" level=info msg="Started container" PID=1842 containerID=5911f9c5cdf18133681f5cf989145599cbcaae783f1f319f06e52ff11166b1ea description=kube-system/coredns-66bc5c9577-dx4j4/coredns id=fd103706-f9ae-40b7-bf80-100ae1c1d58b name=/runtime.v1.RuntimeService/StartContainer sandboxID=d7fe03b5042f5403a5f9fda64cb82b09245ed470b9e3882bd81d710ab5086060
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.796908112Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3566cecc-0da4-4f44-a19e-84a7d150ee2e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.796994977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.801283974Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5b380c41ed81ada45378bdbb29a12c884a6cf2eb56a53e495f195d4d9a98576d UID:05ff451f-6a2b-4a5f-a0ee-6b04e30d84fe NetNS:/var/run/netns/85096f91-016a-4652-941b-9c3f042c3e88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c8a2b0}] Aliases:map[]}"
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.801318207Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.81111443Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5b380c41ed81ada45378bdbb29a12c884a6cf2eb56a53e495f195d4d9a98576d UID:05ff451f-6a2b-4a5f-a0ee-6b04e30d84fe NetNS:/var/run/netns/85096f91-016a-4652-941b-9c3f042c3e88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c8a2b0}] Aliases:map[]}"
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.811247403Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.812163734Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.812976785Z" level=info msg="Ran pod sandbox 5b380c41ed81ada45378bdbb29a12c884a6cf2eb56a53e495f195d4d9a98576d with infra container: default/busybox/POD" id=3566cecc-0da4-4f44-a19e-84a7d150ee2e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.814260105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ffdca837-5265-49f0-83ed-5f6787159cdd name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.814404378Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ffdca837-5265-49f0-83ed-5f6787159cdd name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.814440445Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ffdca837-5265-49f0-83ed-5f6787159cdd name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.815172423Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c102526-47b7-492f-96b8-dcc97a17d71a name=/runtime.v1.ImageService/PullImage
	Oct 25 09:14:11 embed-certs-106968 crio[771]: time="2025-10-25T09:14:11.818083774Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.567858154Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8c102526-47b7-492f-96b8-dcc97a17d71a name=/runtime.v1.ImageService/PullImage
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.568765918Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=03abf4da-877a-4780-9e2a-621e13da9379 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.570552682Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c2e508f-71eb-4a21-b92e-dd25338cf844 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.574005144Z" level=info msg="Creating container: default/busybox/busybox" id=1b075903-e1dc-4fa7-886d-fc7e10c0e5c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.574160402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.578117366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.57871339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.615343177Z" level=info msg="Created container ff32d356dde416da652cbf7c7c6d9b5375def253e35aecb9c6605211df099864: default/busybox/busybox" id=1b075903-e1dc-4fa7-886d-fc7e10c0e5c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.616082268Z" level=info msg="Starting container: ff32d356dde416da652cbf7c7c6d9b5375def253e35aecb9c6605211df099864" id=489eb5dd-d28e-4a93-8305-4448a3ba8577 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:14:12 embed-certs-106968 crio[771]: time="2025-10-25T09:14:12.618243848Z" level=info msg="Started container" PID=1921 containerID=ff32d356dde416da652cbf7c7c6d9b5375def253e35aecb9c6605211df099864 description=default/busybox/busybox id=489eb5dd-d28e-4a93-8305-4448a3ba8577 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b380c41ed81ada45378bdbb29a12c884a6cf2eb56a53e495f195d4d9a98576d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ff32d356dde41       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago        Running             busybox                   0                   5b380c41ed81a       busybox                                      default
	5911f9c5cdf18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago       Running             coredns                   0                   d7fe03b5042f5       coredns-66bc5c9577-dx4j4                     kube-system
	67cb8a0841524       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago       Running             storage-provisioner       0                   926b90c62c5be       storage-provisioner                          kube-system
	110830fc33a6f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      52 seconds ago       Running             kube-proxy                0                   9312bac4756ee       kube-proxy-sm8hw                             kube-system
	c82b765ae9502       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      52 seconds ago       Running             kindnet-cni               0                   78ada91fd0ca3       kindnet-cf69x                                kube-system
	097613e00b6d6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   558000c6dce1f       kube-apiserver-embed-certs-106968            kube-system
	b5964c83fd646       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   47a4847ab8fe0       kube-controller-manager-embed-certs-106968   kube-system
	373fd3cb2c03b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   372e08304c8d7       etcd-embed-certs-106968                      kube-system
	ab33f6a034146       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   f9c9a2df745ba       kube-scheduler-embed-certs-106968            kube-system
	
	
	==> coredns [5911f9c5cdf18133681f5cf989145599cbcaae783f1f319f06e52ff11166b1ea] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53072 - 15218 "HINFO IN 7229483092094720377.7189873568511772862. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.080209046s
	
	
	==> describe nodes <==
	Name:               embed-certs-106968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-106968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=embed-certs-106968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-106968
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:14:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:14:08 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:14:08 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:14:08 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:14:08 +0000   Sat, 25 Oct 2025 09:14:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-106968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a272e628-6722-4504-b4e0-39037ebf73c9
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-dx4j4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-embed-certs-106968                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-cf69x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-embed-certs-106968             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-106968    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-sm8hw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-embed-certs-106968             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 59s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node embed-certs-106968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node embed-certs-106968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node embed-certs-106968 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node embed-certs-106968 event: Registered Node embed-certs-106968 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-106968 status is now: NodeReady
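
The NodeReady flip at 09:14:08 is the signal the earlier failed 'describe nodes' calls were waiting on. One way to poll just that condition, assuming kubectl is pointed at this cluster:

    #!/usr/bin/env bash
    # Print the node's Ready condition status (True/False/Unknown),
    # the same value shown in the Conditions table above.
    kubectl get node embed-certs-106968 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
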
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [373fd3cb2c03b9654a139312ecec4b3bb7a1cfeef0401bea42e3f6b89c797968] <==
	{"level":"warn","ts":"2025-10-25T09:13:18.285862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.295831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.304809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.311183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.325673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.333173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.339700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.346387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.352806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.359511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.365978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.373272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.379698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.387616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.394591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.400892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.409595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.417147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.423926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.430429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.450657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.458388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.465014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:13:18.517075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:06.299143Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.959705ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765720487821686 > lease_revoke:<id:5b339a1aa4c40121>","response":"size:29"}
	
	
	==> kernel <==
	 09:14:20 up 56 min,  0 user,  load average: 2.23, 3.01, 2.12
	Linux embed-certs-106968 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c82b765ae9502f651ea94fddc094059755a7a9317cfe9cd2e91c1460f78f3d22] <==
	I1025 09:13:27.639047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:13:27.639274       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:13:27.639398       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:13:27.639413       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:13:27.639431       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:13:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:13:27.841130       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:13:27.841256       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:13:27.841278       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:13:27.841393       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:13:57.841447       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:13:57.841451       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:13:57.841451       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:13:57.841513       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 09:13:59.241720       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:13:59.241752       1 metrics.go:72] Registering metrics
	I1025 09:13:59.241824       1 controller.go:711] "Syncing nftables rules"
	I1025 09:14:07.847777       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:14:07.847812       1 main.go:301] handling current node
	I1025 09:14:17.843803       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:14:17.843844       1 main.go:301] handling current node
	
	
	==> kube-apiserver [097613e00b6d6fe967c5288a40a4a6c108c7d677eb8861f80557b52e49780a82] <==
	I1025 09:13:18.978924       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:13:18.978958       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 09:13:18.983955       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:13:18.990477       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:13:18.994795       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:13:19.002180       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:13:19.881827       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:13:19.886071       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:13:19.886090       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:13:20.462839       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:13:20.504744       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:13:20.584406       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:13:20.592174       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1025 09:13:20.593463       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:13:20.600735       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:13:20.899669       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:13:21.416487       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:13:21.428453       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:13:21.436802       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:13:26.552793       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:13:26.556497       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:13:26.800722       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:13:27.002884       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 09:13:27.002928       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 09:14:18.599105       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:60808: use of closed network connection
	
	
	==> kube-controller-manager [b5964c83fd64689c62e6e80b74f1f31b4ac6374796ab50fbb1d710905ab7a135] <==
	I1025 09:13:25.898126       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-106968"
	I1025 09:13:25.898195       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:13:25.899197       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:13:25.899222       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:13:25.899281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:13:25.899291       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:13:25.899392       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:13:25.899507       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:13:25.899656       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:13:25.899664       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:13:25.899739       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:13:25.899758       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:13:25.899803       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:13:25.901041       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:13:25.903169       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:13:25.903233       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:13:25.903274       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:13:25.903287       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:13:25.903292       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:13:25.903279       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:13:25.905394       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:13:25.910566       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-106968" podCIDRs=["10.244.0.0/24"]
	I1025 09:13:25.910896       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:13:25.921089       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:14:10.905125       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [110830fc33a6f095a13df20d967cda9f0ef0640621f82369968bb7f1ef2c0076] <==
	I1025 09:13:27.471950       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:13:27.532968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:13:27.634054       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:13:27.634119       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 09:13:27.634235       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:13:27.658151       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:13:27.658223       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:13:27.664826       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:13:27.665155       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:13:27.665192       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:13:27.666730       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:13:27.666763       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:13:27.666819       1 config.go:200] "Starting service config controller"
	I1025 09:13:27.666825       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:13:27.666830       1 config.go:309] "Starting node config controller"
	I1025 09:13:27.666840       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:13:27.666847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:13:27.666859       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:13:27.666865       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:13:27.767300       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:13:27.767327       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:13:27.767316       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ab33f6a034146fa2cd7aaa0d2f3aac1bfd815f760a145cbc1dbfff7e27677481] <==
	E1025 09:13:18.934706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:13:18.934913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:13:18.934933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:13:18.934955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:13:18.934974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:13:18.935028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:13:18.935145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:13:18.935672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:13:18.935715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:13:18.935726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:13:18.935755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:13:19.743211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:13:19.753677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:13:19.800093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:13:19.867216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:13:19.962538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:13:20.000996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:13:20.042852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:13:20.055132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:13:20.072650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:13:20.081921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:13:20.118216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:13:20.139797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:13:20.441536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:13:23.730440       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:13:22 embed-certs-106968 kubelet[1311]: I1025 09:13:22.325430    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-106968" podStartSLOduration=1.325408733 podStartE2EDuration="1.325408733s" podCreationTimestamp="2025-10-25 09:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:22.324888926 +0000 UTC m=+1.152061060" watchObservedRunningTime="2025-10-25 09:13:22.325408733 +0000 UTC m=+1.152580861"
	Oct 25 09:13:22 embed-certs-106968 kubelet[1311]: I1025 09:13:22.336979    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-106968" podStartSLOduration=1.3369565319999999 podStartE2EDuration="1.336956532s" podCreationTimestamp="2025-10-25 09:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:22.33629607 +0000 UTC m=+1.163468204" watchObservedRunningTime="2025-10-25 09:13:22.336956532 +0000 UTC m=+1.164128665"
	Oct 25 09:13:22 embed-certs-106968 kubelet[1311]: I1025 09:13:22.367398    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-106968" podStartSLOduration=1.367376984 podStartE2EDuration="1.367376984s" podCreationTimestamp="2025-10-25 09:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:22.356594099 +0000 UTC m=+1.183766235" watchObservedRunningTime="2025-10-25 09:13:22.367376984 +0000 UTC m=+1.194549118"
	Oct 25 09:13:22 embed-certs-106968 kubelet[1311]: I1025 09:13:22.378891    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-106968" podStartSLOduration=2.378869419 podStartE2EDuration="2.378869419s" podCreationTimestamp="2025-10-25 09:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:22.367562327 +0000 UTC m=+1.194734462" watchObservedRunningTime="2025-10-25 09:13:22.378869419 +0000 UTC m=+1.206041554"
	Oct 25 09:13:25 embed-certs-106968 kubelet[1311]: I1025 09:13:25.913334    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:13:25 embed-certs-106968 kubelet[1311]: I1025 09:13:25.914634    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095049    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a41162a2-bd3f-438a-a1e1-20b47711ed13-lib-modules\") pod \"kindnet-cf69x\" (UID: \"a41162a2-bd3f-438a-a1e1-20b47711ed13\") " pod="kube-system/kindnet-cf69x"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095117    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a41162a2-bd3f-438a-a1e1-20b47711ed13-xtables-lock\") pod \"kindnet-cf69x\" (UID: \"a41162a2-bd3f-438a-a1e1-20b47711ed13\") " pod="kube-system/kindnet-cf69x"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095142    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gv5d\" (UniqueName: \"kubernetes.io/projected/a41162a2-bd3f-438a-a1e1-20b47711ed13-kube-api-access-9gv5d\") pod \"kindnet-cf69x\" (UID: \"a41162a2-bd3f-438a-a1e1-20b47711ed13\") " pod="kube-system/kindnet-cf69x"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095176    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/080ad068-2357-4398-a8b8-ee46ec2f6a7c-lib-modules\") pod \"kube-proxy-sm8hw\" (UID: \"080ad068-2357-4398-a8b8-ee46ec2f6a7c\") " pod="kube-system/kube-proxy-sm8hw"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095196    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblrj\" (UniqueName: \"kubernetes.io/projected/080ad068-2357-4398-a8b8-ee46ec2f6a7c-kube-api-access-rblrj\") pod \"kube-proxy-sm8hw\" (UID: \"080ad068-2357-4398-a8b8-ee46ec2f6a7c\") " pod="kube-system/kube-proxy-sm8hw"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095216    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a41162a2-bd3f-438a-a1e1-20b47711ed13-cni-cfg\") pod \"kindnet-cf69x\" (UID: \"a41162a2-bd3f-438a-a1e1-20b47711ed13\") " pod="kube-system/kindnet-cf69x"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095238    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/080ad068-2357-4398-a8b8-ee46ec2f6a7c-kube-proxy\") pod \"kube-proxy-sm8hw\" (UID: \"080ad068-2357-4398-a8b8-ee46ec2f6a7c\") " pod="kube-system/kube-proxy-sm8hw"
	Oct 25 09:13:27 embed-certs-106968 kubelet[1311]: I1025 09:13:27.095262    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/080ad068-2357-4398-a8b8-ee46ec2f6a7c-xtables-lock\") pod \"kube-proxy-sm8hw\" (UID: \"080ad068-2357-4398-a8b8-ee46ec2f6a7c\") " pod="kube-system/kube-proxy-sm8hw"
	Oct 25 09:13:28 embed-certs-106968 kubelet[1311]: I1025 09:13:28.324891    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sm8hw" podStartSLOduration=1.324868466 podStartE2EDuration="1.324868466s" podCreationTimestamp="2025-10-25 09:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:28.324570093 +0000 UTC m=+7.151742226" watchObservedRunningTime="2025-10-25 09:13:28.324868466 +0000 UTC m=+7.152040600"
	Oct 25 09:13:28 embed-certs-106968 kubelet[1311]: I1025 09:13:28.346731    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cf69x" podStartSLOduration=1.346704191 podStartE2EDuration="1.346704191s" podCreationTimestamp="2025-10-25 09:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:13:28.346618838 +0000 UTC m=+7.173790972" watchObservedRunningTime="2025-10-25 09:13:28.346704191 +0000 UTC m=+7.173876326"
	Oct 25 09:14:08 embed-certs-106968 kubelet[1311]: I1025 09:14:08.352440    1311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:14:08 embed-certs-106968 kubelet[1311]: I1025 09:14:08.497258    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aeff6e0f-be6e-4b3a-aa46-b142043c94e4-tmp\") pod \"storage-provisioner\" (UID: \"aeff6e0f-be6e-4b3a-aa46-b142043c94e4\") " pod="kube-system/storage-provisioner"
	Oct 25 09:14:08 embed-certs-106968 kubelet[1311]: I1025 09:14:08.497317    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx8s6\" (UniqueName: \"kubernetes.io/projected/642b0204-f78e-4036-9b60-f7dafda21646-kube-api-access-fx8s6\") pod \"coredns-66bc5c9577-dx4j4\" (UID: \"642b0204-f78e-4036-9b60-f7dafda21646\") " pod="kube-system/coredns-66bc5c9577-dx4j4"
	Oct 25 09:14:08 embed-certs-106968 kubelet[1311]: I1025 09:14:08.497416    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnp6p\" (UniqueName: \"kubernetes.io/projected/aeff6e0f-be6e-4b3a-aa46-b142043c94e4-kube-api-access-nnp6p\") pod \"storage-provisioner\" (UID: \"aeff6e0f-be6e-4b3a-aa46-b142043c94e4\") " pod="kube-system/storage-provisioner"
	Oct 25 09:14:08 embed-certs-106968 kubelet[1311]: I1025 09:14:08.497456    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642b0204-f78e-4036-9b60-f7dafda21646-config-volume\") pod \"coredns-66bc5c9577-dx4j4\" (UID: \"642b0204-f78e-4036-9b60-f7dafda21646\") " pod="kube-system/coredns-66bc5c9577-dx4j4"
	Oct 25 09:14:09 embed-certs-106968 kubelet[1311]: I1025 09:14:09.415344    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dx4j4" podStartSLOduration=42.415319528 podStartE2EDuration="42.415319528s" podCreationTimestamp="2025-10-25 09:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:09.415315444 +0000 UTC m=+48.242487578" watchObservedRunningTime="2025-10-25 09:14:09.415319528 +0000 UTC m=+48.242491663"
	Oct 25 09:14:11 embed-certs-106968 kubelet[1311]: I1025 09:14:11.489387    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.489360843 podStartE2EDuration="44.489360843s" podCreationTimestamp="2025-10-25 09:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:09.440854513 +0000 UTC m=+48.268026644" watchObservedRunningTime="2025-10-25 09:14:11.489360843 +0000 UTC m=+50.316532978"
	Oct 25 09:14:11 embed-certs-106968 kubelet[1311]: I1025 09:14:11.614605    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw2sc\" (UniqueName: \"kubernetes.io/projected/05ff451f-6a2b-4a5f-a0ee-6b04e30d84fe-kube-api-access-nw2sc\") pod \"busybox\" (UID: \"05ff451f-6a2b-4a5f-a0ee-6b04e30d84fe\") " pod="default/busybox"
	Oct 25 09:14:13 embed-certs-106968 kubelet[1311]: I1025 09:14:13.427706    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6725069750000001 podStartE2EDuration="2.427681827s" podCreationTimestamp="2025-10-25 09:14:11 +0000 UTC" firstStartedPulling="2025-10-25 09:14:11.814752296 +0000 UTC m=+50.641924412" lastFinishedPulling="2025-10-25 09:14:12.569927127 +0000 UTC m=+51.397099264" observedRunningTime="2025-10-25 09:14:13.42759933 +0000 UTC m=+52.254771477" watchObservedRunningTime="2025-10-25 09:14:13.427681827 +0000 UTC m=+52.254853961"
	
	
	==> storage-provisioner [67cb8a08415240820fcea90d22df57118e54a6a580115b596beed31a5a69e9b8] <==
	I1025 09:14:08.741991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:14:08.750460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:14:08.750510       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:14:08.752540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:08.757077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:14:08.757243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:14:08.757483       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-106968_e1224fd9-3489-42fd-ba17-56bdf0843fd9!
	I1025 09:14:08.757806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e170d88-5532-46a5-99b3-fc8a977a4e4b", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-106968_e1224fd9-3489-42fd-ba17-56bdf0843fd9 became leader
	W1025 09:14:08.759616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:08.762583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:14:08.858700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-106968_e1224fd9-3489-42fd-ba17-56bdf0843fd9!
	W1025 09:14:10.765760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:10.771460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:12.780941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:12.788711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:14.792454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:14.798387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:16.802229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:16.806814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:18.810860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:14:18.815855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106968 -n embed-certs-106968
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-106968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.52s)
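Note on the kube-scheduler log above: it is dominated by "Failed to watch ... is forbidden" errors in the seconds after startup. These are the usual transient race between the scheduler coming up and its RBAC bindings becoming visible to the API server, and they stop once the "Caches are synced" line appears. A minimal follow-up sketch, assuming a standard kubectl with impersonation rights (the context name is taken from the logs above; these commands were not part of the recorded run):

	# Hypothetical verification, not executed by the test: confirm that
	# system:kube-scheduler can now list two of the resources the reflector
	# was denied between 09:13:18 and 09:13:20.
	kubectl --context embed-certs-106968 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler
	kubectl --context embed-certs-106968 auth can-i list persistentvolumes --as=system:kube-scheduler
	# Both printing "yes" after startup supports reading the earlier
	# "forbidden" errors as a bootstrap race rather than a broken ClusterRole.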
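Similarly, the storage-provisioner's repeating "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings come from its leader-election client, which still takes its lock on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the LeaderElection event above. A sketch of inspecting both sides, assuming standard kubectl (not executed by the test):

	# The Endpoints object the provisioner locks on (the source of the warnings):
	kubectl --context embed-certs-106968 -n kube-system get endpoints k8s.io-minikube-hostpath
	# The replacement API the warning points at:
	kubectl --context embed-certs-106968 -n kube-system get endpointslices
	# Newer provisioners avoid the Endpoints API entirely by electing on a
	# coordination.k8s.io/v1 Lease:
	kubectl --context embed-certs-106968 -n kube-system get leases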

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.639833ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:14:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
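The exit status 11 above is minikube's paused-state guard: before enabling an addon it lists containers via sudo runc list -f json inside the node, and that command aborts because runc's default state directory, /run/runc, does not exist in this crio-based node. A reproduction sketch, assuming the standard minikube ssh passthrough (not executed by the test itself):

	# Hypothetical reproduction of the exact check MK_ADDON_ENABLE_PAUSED wraps;
	# "minikube ssh" and "runc list" are real commands, the profile name is from above.
	out/minikube-linux-amd64 -p newest-cni-036155 ssh -- sudo runc list -f json
	# Expected failure while the runtime is crio and nothing has populated /run/runc:
	#   level=error msg="open /run/runc: no such file or directory"
	# An absent /run/runc makes "runc list" error out instead of returning an
	# empty list, which the paused check reports as the failure seen above.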
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-036155
helpers_test.go:243: (dbg) docker inspect newest-cni-036155:

-- stdout --
	[
	    {
	        "Id": "09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb",
	        "Created": "2025-10-25T09:14:06.682120526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 260307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:14:06.722728181Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/hostname",
	        "HostsPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/hosts",
	        "LogPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb-json.log",
	        "Name": "/newest-cni-036155",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-036155:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-036155",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb",
	                "LowerDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-036155",
	                "Source": "/var/lib/docker/volumes/newest-cni-036155/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-036155",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-036155",
	                "name.minikube.sigs.k8s.io": "newest-cni-036155",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "806d77948b4daef58d4936a5415c083a82f99820528f95525f2e6a54117145f4",
	            "SandboxKey": "/var/run/docker/netns/806d77948b4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-036155": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:f3:32:e9:7e:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ead30f4723103afe9c35f4580c74d2202de41578e0480b83c36d81600895331e",
	                    "EndpointID": "562f9268aee12c37204ea7980a68d78064273e5a2812faee001fe70cc3b31ee7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-036155",
	                        "09a0c00b2999"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
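In the inspect output, HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort (minikube lets Docker pick free ports), while the ports actually assigned appear under NetworkSettings.Ports (22/tcp -> 33085, 8443/tcp -> 33088, and so on). A sketch of extracting one mapping with a Go template instead of reading the full JSON, using the standard docker inspect --format syntax (this query was not part of the recorded run):

	# Hypothetical convenience query: the host port mapped to the API server
	# port (8443/tcp) in the state captured above.
	docker inspect newest-cni-036155 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# Prints 33088 here; "docker port newest-cni-036155 8443" shows the same
	# mapping in host:port form.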
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-036155 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:11 UTC │
	│ start   │ -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:11 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-016092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ stop    │ -p no-preload-016092 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ addons  │ enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:13 UTC │
	│ image   │ old-k8s-version-959110 image list --format=json                                                                                                                                                                                               │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ pause   │ -p old-k8s-version-959110 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │                     │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ delete  │ -p old-k8s-version-959110                                                                                                                                                                                                                     │ old-k8s-version-959110       │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p cert-expiration-851718                                                                                                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p disable-driver-mounts-664368                                                                                                                                                                                                               │ disable-driver-mounts-664368 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ image   │ no-preload-016092 image list --format=json                                                                                                                                                                                                    │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ pause   │ -p no-preload-016092 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-891466 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p embed-certs-106968 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:14:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:14:01.349429  259325 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:14:01.349695  259325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:01.349703  259325 out.go:374] Setting ErrFile to fd 2...
	I1025 09:14:01.349707  259325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:01.349881  259325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:14:01.350326  259325 out.go:368] Setting JSON to false
	I1025 09:14:01.351488  259325 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3389,"bootTime":1761380252,"procs":372,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:14:01.351566  259325 start.go:141] virtualization: kvm guest
	I1025 09:14:01.353581  259325 out.go:179] * [newest-cni-036155] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:14:01.354862  259325 notify.go:220] Checking for updates...
	I1025 09:14:01.354911  259325 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:14:01.356248  259325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:14:01.357829  259325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:14:01.359191  259325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:14:01.360570  259325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:14:01.362056  259325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:14:01.363964  259325 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364078  259325 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364155  259325 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:01.364286  259325 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:14:01.388723  259325 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:14:01.388851  259325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:14:01.446757  259325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:14:01.436278421 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:14:01.446909  259325 docker.go:318] overlay module found
	I1025 09:14:01.448814  259325 out.go:179] * Using the docker driver based on user configuration
	I1025 09:14:01.449910  259325 start.go:305] selected driver: docker
	I1025 09:14:01.449923  259325 start.go:925] validating driver "docker" against <nil>
	I1025 09:14:01.449933  259325 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:14:01.450511  259325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:14:01.511090  259325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:14:01.500485086 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:14:01.511242  259325 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 09:14:01.511267  259325 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 09:14:01.511481  259325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:14:01.513762  259325 out.go:179] * Using Docker driver with root privileges
	I1025 09:14:01.514937  259325 cni.go:84] Creating CNI manager for ""
	I1025 09:14:01.515024  259325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:14:01.515037  259325 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:14:01.515128  259325 start.go:349] cluster config:
	{Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:14:01.516524  259325 out.go:179] * Starting "newest-cni-036155" primary control-plane node in "newest-cni-036155" cluster
	I1025 09:14:01.517782  259325 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:14:01.518984  259325 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:14:01.520226  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:01.520270  259325 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:14:01.520295  259325 cache.go:58] Caching tarball of preloaded images
	I1025 09:14:01.520378  259325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:14:01.520391  259325 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:14:01.520490  259325 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:14:01.520629  259325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json ...
	I1025 09:14:01.520680  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json: {Name:mkbfe9b74fbf6dcc9fce3c2e514dd100d024d023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:01.542057  259325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:14:01.542076  259325 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:14:01.542091  259325 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:14:01.542116  259325 start.go:360] acquireMachinesLock for newest-cni-036155: {Name:mk5b9af4be10aaa846ed9c8c31160df3caae8c3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:14:01.542211  259325 start.go:364] duration metric: took 81.03µs to acquireMachinesLock for "newest-cni-036155"
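acquireMachinesLock serializes machine creation across the concurrent test profiles with a lock configured as Delay:500ms Timeout:10m0s. A hedged sketch of that retry-until-timeout pattern using a plain lockfile — minikube's actual lock implementation differs, this only illustrates the Delay/Timeout semantics visible in the log:

    // Sketch: try to create an exclusive lockfile, retrying every `delay`
    // until `timeout` elapses; the returned func releases the lock.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; machine provisioning can proceed")
    }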
	I1025 09:14:01.542235  259325 start.go:93] Provisioning new machine with config: &{Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:14:01.542374  259325 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:13:58.278667  225660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.058221196s)
	W1025 09:13:58.278709  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1025 09:13:58.278726  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:13:58.278748  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:13:58.315063  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:13:58.315094  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:13:58.352625  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:13:58.352693  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:13:58.381187  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:13:58.381214  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:13:58.436157  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:13:58.436186  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:13:58.492499  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:13:58.492535  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:13:58.528534  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:13:58.528568  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:13:58.632433  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:13:58.632471  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:01.149149  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:02.578502  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:57860->192.168.85.2:8443: read: connection reset by peer
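The healthz probe above is a plain HTTPS GET against the apiserver; a "stopped" line means the TCP or TLS layer failed (connection refused, reset, handshake timeout) before any HTTP status came back. A minimal sketch, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and a short timeout:

    // Sketch: probe an apiserver /healthz endpoint once.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // e.g. connection refused / reset by peer
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // 200 "ok" once the apiserver is up
    }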
	I1025 09:14:02.578582  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:02.578671  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:02.612993  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:02.613015  225660 cri.go:89] found id: "4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:14:02.613019  225660 cri.go:89] found id: ""
	I1025 09:14:02.613026  225660 logs.go:282] 2 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0]
	I1025 09:14:02.613087  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.617248  225660 ssh_runner.go:195] Run: which crictl
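The container discovery pattern here is: `crictl ps -a --quiet --name=<component>` prints one container ID per line, and `which crictl` is then run once per found ID to locate the binary for the later log-tail commands. A sketch of the ID-listing half with os/exec (crictl on PATH and root access are assumptions):

    // Sketch: list CRI container IDs for a named component.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }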
	I1025 09:14:02.621187  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:02.621252  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:02.651262  225660 cri.go:89] found id: ""
	I1025 09:14:02.651292  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.651304  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:02.651315  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:02.651375  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:02.680223  225660 cri.go:89] found id: ""
	I1025 09:14:02.680246  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.680255  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:02.680261  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:02.680304  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:02.708376  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:02.708400  225660 cri.go:89] found id: ""
	I1025 09:14:02.708419  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:02.708470  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.712497  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:02.712567  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:02.743096  225660 cri.go:89] found id: ""
	I1025 09:14:02.743123  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.743135  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:02.743142  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:02.743189  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:02.776405  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:02.776424  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:02.776428  225660 cri.go:89] found id: ""
	I1025 09:14:02.776435  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:02.776494  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.780906  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:02.784758  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:02.784832  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1025 09:13:59.435798  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:01.935111  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:01.271430  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:14:03.770895  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:14:01.544621  259325 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:14:01.544863  259325 start.go:159] libmachine.API.Create for "newest-cni-036155" (driver="docker")
	I1025 09:14:01.544898  259325 client.go:168] LocalClient.Create starting
	I1025 09:14:01.544971  259325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:14:01.545008  259325 main.go:141] libmachine: Decoding PEM data...
	I1025 09:14:01.545033  259325 main.go:141] libmachine: Parsing certificate...
	I1025 09:14:01.545103  259325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:14:01.545131  259325 main.go:141] libmachine: Decoding PEM data...
	I1025 09:14:01.545157  259325 main.go:141] libmachine: Parsing certificate...
	I1025 09:14:01.545523  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:14:01.564874  259325 cli_runner.go:211] docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:14:01.564937  259325 network_create.go:284] running [docker network inspect newest-cni-036155] to gather additional debugging logs...
	I1025 09:14:01.564956  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155
	W1025 09:14:01.582897  259325 cli_runner.go:211] docker network inspect newest-cni-036155 returned with exit code 1
	I1025 09:14:01.582929  259325 network_create.go:287] error running [docker network inspect newest-cni-036155]: docker network inspect newest-cni-036155: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-036155 not found
	I1025 09:14:01.582945  259325 network_create.go:289] output of [docker network inspect newest-cni-036155]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-036155 not found
	
	** /stderr **
	I1025 09:14:01.583104  259325 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:14:01.601343  259325 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:14:01.602058  259325 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:14:01.602766  259325 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:14:01.603404  259325 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:14:01.603905  259325 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9aa42478a513 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:4e:f8:f5:5b:2e} reservation:<nil>}
	I1025 09:14:01.604415  259325 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5d58a21465e1 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4e:78:a8:09:a3:02} reservation:<nil>}
	I1025 09:14:01.605183  259325 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb4940}
	I1025 09:14:01.605204  259325 network_create.go:124] attempt to create docker network newest-cni-036155 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:14:01.605249  259325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-036155 newest-cni-036155
	I1025 09:14:01.664530  259325 network_create.go:108] docker network newest-cni-036155 192.168.103.0/24 created
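The subnet scan above walks candidate /24s starting at 192.168.49.0/24 and advancing the third octet by 9 (49, 58, 67, ...), skipping any candidate that already backs an existing bridge, until it finds a free one (192.168.103.0/24 here). A rough local approximation that checks host interface addresses — this mirrors the behaviour visible in the log, not minikube's exact algorithm:

    // Sketch: find the first candidate /24 with no local interface inside it.
    package main

    import (
        "fmt"
        "net"
    )

    func taken(subnet *net.IPNet) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative on error
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
                return true // a docker bridge gateway (br-...) already lives here
            }
        }
        return false
    }

    func main() {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            _, subnet, _ := net.ParseCIDR(cidr)
            if taken(subnet) {
                fmt.Println("skipping subnet", cidr, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", cidr)
            return
        }
    }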
	I1025 09:14:01.664563  259325 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-036155" container
	I1025 09:14:01.664653  259325 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:14:01.684160  259325 cli_runner.go:164] Run: docker volume create newest-cni-036155 --label name.minikube.sigs.k8s.io=newest-cni-036155 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:14:01.703110  259325 oci.go:103] Successfully created a docker volume newest-cni-036155
	I1025 09:14:01.703199  259325 cli_runner.go:164] Run: docker run --rm --name newest-cni-036155-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-036155 --entrypoint /usr/bin/test -v newest-cni-036155:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:14:02.100402  259325 oci.go:107] Successfully prepared a docker volume newest-cni-036155
	I1025 09:14:02.100450  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:02.100473  259325 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:14:02.100556  259325 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-036155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:14:02.813543  225660 cri.go:89] found id: ""
	I1025 09:14:02.813571  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.813581  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:02.813588  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:02.813668  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:02.843013  225660 cri.go:89] found id: ""
	I1025 09:14:02.843039  225660 logs.go:282] 0 containers: []
	W1025 09:14:02.843049  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:02.843065  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:02.843079  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:02.858191  225660 logs.go:123] Gathering logs for kube-apiserver [4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0] ...
	I1025 09:14:02.858224  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4c90b35e800834020b69e120060b403799332dbd66e90e5d079f2d32711f21b0"
	I1025 09:14:02.894345  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:02.894398  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:02.924538  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:02.924566  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:02.981267  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:02.981304  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:03.096416  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:03.096461  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:03.168015  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:03.168040  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:03.168054  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:03.205969  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:03.206012  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:03.271485  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:03.271526  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:03.300749  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:03.300783  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:05.840548  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:05.841022  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:05.841081  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:05.841139  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:05.869264  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:05.869286  225660 cri.go:89] found id: ""
	I1025 09:14:05.869293  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:05.869340  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:05.873358  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:05.873414  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:05.901366  225660 cri.go:89] found id: ""
	I1025 09:14:05.901395  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.901406  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:05.901413  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:05.901467  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:05.931032  225660 cri.go:89] found id: ""
	I1025 09:14:05.931059  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.931069  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:05.931076  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:05.931142  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:05.959495  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:05.959515  225660 cri.go:89] found id: ""
	I1025 09:14:05.959523  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:05.959567  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:05.963756  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:05.963826  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:05.991899  225660 cri.go:89] found id: ""
	I1025 09:14:05.991925  225660 logs.go:282] 0 containers: []
	W1025 09:14:05.991943  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:05.991953  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:05.992018  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:06.019791  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:06.019811  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:06.019815  225660 cri.go:89] found id: ""
	I1025 09:14:06.019822  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:06.019886  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:06.024190  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:06.028096  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:06.028161  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:06.055987  225660 cri.go:89] found id: ""
	I1025 09:14:06.056018  225660 logs.go:282] 0 containers: []
	W1025 09:14:06.056029  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:06.056035  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:06.056090  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:06.083950  225660 cri.go:89] found id: ""
	I1025 09:14:06.083976  225660 logs.go:282] 0 containers: []
	W1025 09:14:06.083987  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:06.084004  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:06.084019  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:06.110553  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:06.110582  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:06.164204  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:06.164238  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:06.253207  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:06.253241  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:06.313928  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:06.313953  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:06.313968  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:06.346421  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:06.346466  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:06.361467  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:06.361496  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:06.393406  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:06.393444  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:06.444918  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:06.444948  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	W1025 09:14:03.935273  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	W1025 09:14:06.435636  253344 node_ready.go:57] node "default-k8s-diff-port-891466" has "Ready":"False" status (will retry)
	I1025 09:14:06.935349  253344 node_ready.go:49] node "default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:06.935378  253344 node_ready.go:38] duration metric: took 11.503747191s for node "default-k8s-diff-port-891466" to be "Ready" ...
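The node_ready wait that just completed polls the node object until its Ready condition flips to True (the "will retry" warnings above are the False iterations). A minimal client-go sketch of that check — the kubeconfig path and node name are taken from the log, and error handling is trimmed:

    // Sketch: read a node's Ready condition once.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-891466", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %q has Ready:%s\n", node.Name, c.Status)
            }
        }
    }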
	I1025 09:14:06.935390  253344 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:14:06.935479  253344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:14:06.948160  253344 api_server.go:72] duration metric: took 11.823550151s to wait for apiserver process to appear ...
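The apiserver process check shells out to pgrep with an exact-match (-x), newest (-n), full-command-line (-f) pattern. A direct equivalent; note pgrep exits nonzero when nothing matches, which os/exec surfaces as an error:

    // Sketch: look for a running kube-apiserver process the way the log does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found yet:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }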
	I1025 09:14:06.948193  253344 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:14:06.948215  253344 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:14:06.953340  253344 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:14:06.954553  253344 api_server.go:141] control plane version: v1.34.1
	I1025 09:14:06.954586  253344 api_server.go:131] duration metric: took 6.384823ms to wait for apiserver health ...
	I1025 09:14:06.954598  253344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:14:06.958083  253344 system_pods.go:59] 8 kube-system pods found
	I1025 09:14:06.958116  253344 system_pods.go:61] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:06.958122  253344 system_pods.go:61] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:06.958130  253344 system_pods.go:61] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:06.958135  253344 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:06.958140  253344 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:06.958151  253344 system_pods.go:61] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:06.958156  253344 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:06.958167  253344 system_pods.go:61] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:06.958175  253344 system_pods.go:74] duration metric: took 3.569351ms to wait for pod list to return data ...
	I1025 09:14:06.958188  253344 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:14:06.960663  253344 default_sa.go:45] found service account: "default"
	I1025 09:14:06.960687  253344 default_sa.go:55] duration metric: took 2.491182ms for default service account to be created ...
	I1025 09:14:06.960698  253344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:14:06.963911  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:06.963945  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:06.963955  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:06.963967  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:06.963974  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:06.963981  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:06.963989  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:06.964176  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:06.964191  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:06.964221  253344 retry.go:31] will retry after 290.946821ms: missing components: kube-dns
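The k8s-apps wait lists the kube-system pods and retries with short jittered delays until no component is missing. A simplified client-go sketch that only checks pod phase — the real check also inspects container readiness, as the Pending / ContainersNotReady lines above show:

    // Sketch: poll kube-system pods until everything reports Running.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                panic(err)
            }
            pending := 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    pending++
                    fmt.Printf("%q is %s\n", p.Name, p.Status.Phase)
                }
            }
            if pending == 0 {
                fmt.Printf("%d kube-system pods found, all running\n", len(pods.Items))
                return
            }
            time.Sleep(300 * time.Millisecond) // the log shows jittered retries near this value
        }
    }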
	I1025 09:14:07.261256  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.261299  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.261308  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.261319  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.261325  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.261331  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.261372  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.261383  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.261392  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.261412  253344 retry.go:31] will retry after 251.1932ms: missing components: kube-dns
	I1025 09:14:07.516457  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.516488  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.516494  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.516500  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.516504  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.516508  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.516512  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.516517  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.516524  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.516541  253344 retry.go:31] will retry after 312.108611ms: missing components: kube-dns
	I1025 09:14:07.832521  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:07.832555  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:07.832561  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:07.832567  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:07.832573  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:07.832577  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:07.832580  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:07.832584  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:07.832591  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:07.832610  253344 retry.go:31] will retry after 578.903074ms: missing components: kube-dns
	I1025 09:14:08.416051  253344 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.416084  253344 system_pods.go:89] "coredns-66bc5c9577-72zpn" [3f0ca3b1-36e4-4471-862a-9eabfb9074aa] Running
	I1025 09:14:08.416092  253344 system_pods.go:89] "etcd-default-k8s-diff-port-891466" [7d75f39f-ebee-41ae-a13b-2e307da7518f] Running
	I1025 09:14:08.416099  253344 system_pods.go:89] "kindnet-9xc2z" [133978f9-4ef3-4e01-ba53-fdf702776a49] Running
	I1025 09:14:08.416104  253344 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-891466" [cfe0a0a2-e76d-4d87-b597-8a26128794aa] Running
	I1025 09:14:08.416109  253344 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-891466" [39fec878-030f-406d-9344-b93ce2b8d235] Running
	I1025 09:14:08.416113  253344 system_pods.go:89] "kube-proxy-rmqbr" [d20569e7-e7e7-4f55-a796-3b40a97b41cb] Running
	I1025 09:14:08.416116  253344 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-891466" [6c7f34b3-8274-4699-871e-e85934222330] Running
	I1025 09:14:08.416121  253344 system_pods.go:89] "storage-provisioner" [64cdaf55-0be7-4f5c-b3f1-86b2c3bf8522] Running
	I1025 09:14:08.416131  253344 system_pods.go:126] duration metric: took 1.455426427s to wait for k8s-apps to be running ...
	I1025 09:14:08.416145  253344 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:14:08.416197  253344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:14:08.429617  253344 system_svc.go:56] duration metric: took 13.46202ms WaitForService to wait for kubelet
	I1025 09:14:08.429689  253344 kubeadm.go:586] duration metric: took 13.305083699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:14:08.429711  253344 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:14:08.432623  253344 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:14:08.432665  253344 node_conditions.go:123] node cpu capacity is 8
	I1025 09:14:08.432680  253344 node_conditions.go:105] duration metric: took 2.964083ms to run NodePressure ...
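The NodePressure step reads capacity figures off the node object and would flag memory or disk pressure conditions. A compact client-go sketch under the same kubeconfig assumption as above (what minikube actually verifies internally may differ):

    // Sketch: print node capacity and any active pressure conditions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral().String())
            fmt.Printf("node cpu capacity is %d\n", n.Status.Capacity.Cpu().Value())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                    fmt.Printf("node %s reports %s\n", n.Name, c.Type)
                }
            }
        }
    }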
	I1025 09:14:08.432693  253344 start.go:241] waiting for startup goroutines ...
	I1025 09:14:08.432702  253344 start.go:246] waiting for cluster config update ...
	I1025 09:14:08.432717  253344 start.go:255] writing updated cluster config ...
	I1025 09:14:08.432974  253344 ssh_runner.go:195] Run: rm -f paused
	I1025 09:14:08.436927  253344 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:08.440402  253344 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.444662  253344 pod_ready.go:94] pod "coredns-66bc5c9577-72zpn" is "Ready"
	I1025 09:14:08.444683  253344 pod_ready.go:86] duration metric: took 4.260186ms for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.446669  253344 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.450415  253344 pod_ready.go:94] pod "etcd-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.450440  253344 pod_ready.go:86] duration metric: took 3.750274ms for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.452271  253344 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.455682  253344 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.455704  253344 pod_ready.go:86] duration metric: took 3.413528ms for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.457512  253344 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:14:05.771472  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	W1025 09:14:08.271104  247074 node_ready.go:57] node "embed-certs-106968" has "Ready":"False" status (will retry)
	I1025 09:14:08.770948  247074 node_ready.go:49] node "embed-certs-106968" is "Ready"
	I1025 09:14:08.770978  247074 node_ready.go:38] duration metric: took 41.503136723s for node "embed-certs-106968" to be "Ready" ...
	I1025 09:14:08.770991  247074 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:14:08.771040  247074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:14:08.786566  247074 api_server.go:72] duration metric: took 41.819658043s to wait for apiserver process to appear ...
	I1025 09:14:08.786597  247074 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:14:08.786620  247074 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1025 09:14:08.791819  247074 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1025 09:14:08.792653  247074 api_server.go:141] control plane version: v1.34.1
	I1025 09:14:08.792675  247074 api_server.go:131] duration metric: took 6.071281ms to wait for apiserver health ...
	I1025 09:14:08.792683  247074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:14:08.796024  247074 system_pods.go:59] 8 kube-system pods found
	I1025 09:14:08.796066  247074 system_pods.go:61] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.796076  247074 system_pods.go:61] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.796088  247074 system_pods.go:61] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.796094  247074 system_pods.go:61] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.796103  247074 system_pods.go:61] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.796108  247074 system_pods.go:61] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.796114  247074 system_pods.go:61] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.796119  247074 system_pods.go:61] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.796133  247074 system_pods.go:74] duration metric: took 3.442989ms to wait for pod list to return data ...
	I1025 09:14:08.796148  247074 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:14:08.798369  247074 default_sa.go:45] found service account: "default"
	I1025 09:14:08.798387  247074 default_sa.go:55] duration metric: took 2.229844ms for default service account to be created ...
	I1025 09:14:08.798394  247074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:14:08.801058  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.801082  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.801088  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.801093  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.801096  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.801100  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.801104  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.801107  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.801112  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.801132  247074 retry.go:31] will retry after 190.781972ms: missing components: kube-dns
	I1025 09:14:08.995887  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:08.995925  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:08.995933  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:08.995941  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:08.995947  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:08.995954  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:08.995959  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:08.995966  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:08.995974  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:08.995996  247074 retry.go:31] will retry after 247.582365ms: missing components: kube-dns
	I1025 09:14:09.247882  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:09.247915  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:14:09.247921  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:09.247927  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:09.247931  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:09.247935  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:09.247940  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:09.247944  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:09.247949  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:14:09.247963  247074 retry.go:31] will retry after 418.536389ms: missing components: kube-dns
	I1025 09:14:09.670936  247074 system_pods.go:86] 8 kube-system pods found
	I1025 09:14:09.670969  247074 system_pods.go:89] "coredns-66bc5c9577-dx4j4" [642b0204-f78e-4036-9b60-f7dafda21646] Running
	I1025 09:14:09.670977  247074 system_pods.go:89] "etcd-embed-certs-106968" [bf9c0326-29d7-425b-918c-816d4295c409] Running
	I1025 09:14:09.670983  247074 system_pods.go:89] "kindnet-cf69x" [a41162a2-bd3f-438a-a1e1-20b47711ed13] Running
	I1025 09:14:09.670988  247074 system_pods.go:89] "kube-apiserver-embed-certs-106968" [df3a270b-ce81-4bc5-994e-e567942a005f] Running
	I1025 09:14:09.670993  247074 system_pods.go:89] "kube-controller-manager-embed-certs-106968" [54201e73-1694-4a71-8c00-4d881b46b2b4] Running
	I1025 09:14:09.670998  247074 system_pods.go:89] "kube-proxy-sm8hw" [080ad068-2357-4398-a8b8-ee46ec2f6a7c] Running
	I1025 09:14:09.671006  247074 system_pods.go:89] "kube-scheduler-embed-certs-106968" [62d2ed8a-7465-4815-84c9-85247e0d8248] Running
	I1025 09:14:09.671011  247074 system_pods.go:89] "storage-provisioner" [aeff6e0f-be6e-4b3a-aa46-b142043c94e4] Running
	I1025 09:14:09.671021  247074 system_pods.go:126] duration metric: took 872.62006ms to wait for k8s-apps to be running ...
	I1025 09:14:09.671033  247074 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:14:09.671082  247074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:14:09.684149  247074 system_svc.go:56] duration metric: took 13.109824ms WaitForService to wait for kubelet
	I1025 09:14:09.684176  247074 kubeadm.go:586] duration metric: took 42.717274637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:14:09.684197  247074 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:14:09.687014  247074 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:14:09.687037  247074 node_conditions.go:123] node cpu capacity is 8
	I1025 09:14:09.687050  247074 node_conditions.go:105] duration metric: took 2.847789ms to run NodePressure ...
	I1025 09:14:09.687060  247074 start.go:241] waiting for startup goroutines ...
	I1025 09:14:09.687067  247074 start.go:246] waiting for cluster config update ...
	I1025 09:14:09.687077  247074 start.go:255] writing updated cluster config ...
	I1025 09:14:09.687328  247074 ssh_runner.go:195] Run: rm -f paused
	I1025 09:14:09.691103  247074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:09.694610  247074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.698843  247074 pod_ready.go:94] pod "coredns-66bc5c9577-dx4j4" is "Ready"
	I1025 09:14:09.698866  247074 pod_ready.go:86] duration metric: took 4.23265ms for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.700733  247074 pod_ready.go:83] waiting for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.704283  247074 pod_ready.go:94] pod "etcd-embed-certs-106968" is "Ready"
	I1025 09:14:09.704303  247074 pod_ready.go:86] duration metric: took 3.551149ms for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.706066  247074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.709547  247074 pod_ready.go:94] pod "kube-apiserver-embed-certs-106968" is "Ready"
	I1025 09:14:09.709564  247074 pod_ready.go:86] duration metric: took 3.482629ms for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.711117  247074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:08.840767  253344 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:08.840794  253344 pod_ready.go:86] duration metric: took 383.263633ms for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.041420  253344 pod_ready.go:83] waiting for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.441977  253344 pod_ready.go:94] pod "kube-proxy-rmqbr" is "Ready"
	I1025 09:14:09.442007  253344 pod_ready.go:86] duration metric: took 400.561652ms for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:09.641678  253344 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.041042  253344 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-891466" is "Ready"
	I1025 09:14:10.041068  253344 pod_ready.go:86] duration metric: took 399.361298ms for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.041080  253344 pod_ready.go:40] duration metric: took 1.604125716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:10.083846  253344 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:14:10.085911  253344 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-891466" cluster and "default" namespace by default
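	(A rough manual equivalent of the labeled pod_ready wait above, assuming the kubeconfig already points at the cluster; the label list and 4m0s budget are taken from the log:)

	for l in k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=240s   # one wait per control-plane label
	done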
	I1025 09:14:10.095667  247074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-106968" is "Ready"
	I1025 09:14:10.095699  247074 pod_ready.go:86] duration metric: took 384.564763ms for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.296396  247074 pod_ready.go:83] waiting for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.695915  247074 pod_ready.go:94] pod "kube-proxy-sm8hw" is "Ready"
	I1025 09:14:10.695940  247074 pod_ready.go:86] duration metric: took 399.512784ms for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:10.895258  247074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:11.295963  247074 pod_ready.go:94] pod "kube-scheduler-embed-certs-106968" is "Ready"
	I1025 09:14:11.295996  247074 pod_ready.go:86] duration metric: took 400.705834ms for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:14:11.296011  247074 pod_ready.go:40] duration metric: took 1.604868452s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:14:11.348313  247074 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:14:06.610431  259325 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-036155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.50981258s)
	I1025 09:14:06.610467  259325 kic.go:203] duration metric: took 4.509989969s to extract preloaded images to volume ...
	W1025 09:14:06.610587  259325 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:14:06.610634  259325 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:14:06.610712  259325 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:14:06.666144  259325 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-036155 --name newest-cni-036155 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-036155 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-036155 --network newest-cni-036155 --ip 192.168.103.2 --volume newest-cni-036155:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
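	(Each --publish=127.0.0.1::<port> flag above binds a container port to a random loopback port on the host; the chosen host port can be recovered afterwards, e.g.:)

	docker port newest-cni-036155 22/tcp   # prints e.g. 127.0.0.1:33085, the SSH port used further below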
	I1025 09:14:06.972900  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Running}}
	I1025 09:14:06.993336  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.013258  259325 cli_runner.go:164] Run: docker exec newest-cni-036155 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:14:07.057407  259325 oci.go:144] the created container "newest-cni-036155" has a running status.
	I1025 09:14:07.057438  259325 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa...
	I1025 09:14:07.113913  259325 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:14:07.147153  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.167068  259325 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:14:07.167088  259325 kic_runner.go:114] Args: [docker exec --privileged newest-cni-036155 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:14:07.214916  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:07.241483  259325 machine.go:93] provisionDockerMachine start ...
	I1025 09:14:07.241575  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:07.268234  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:07.268673  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:07.268698  259325 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:14:07.269464  259325 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37304->127.0.0.1:33085: read: connection reset by peer
	I1025 09:14:10.411580  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-036155
	
	I1025 09:14:10.411618  259325 ubuntu.go:182] provisioning hostname "newest-cni-036155"
	I1025 09:14:10.411703  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:10.430482  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:10.430731  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:10.430747  259325 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-036155 && echo "newest-cni-036155" | sudo tee /etc/hostname
	I1025 09:14:10.585307  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-036155
	
	I1025 09:14:10.585419  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:10.606084  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:10.606313  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:10.606331  259325 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-036155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-036155/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-036155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:14:10.747795  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
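	(The /etc/hosts fix-up just run is idempotent: it rewrites an existing 127.0.1.1 entry or appends one. It can be verified from the host; a quick sketch using the container name from the log:)

	docker exec newest-cni-036155 grep -n '127.0.1.1' /etc/hosts   # expect: 127.0.1.1 newest-cni-036155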
	I1025 09:14:10.747824  259325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:14:10.747864  259325 ubuntu.go:190] setting up certificates
	I1025 09:14:10.747881  259325 provision.go:84] configureAuth start
	I1025 09:14:10.747955  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:10.766485  259325 provision.go:143] copyHostCerts
	I1025 09:14:10.766572  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:14:10.766587  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:14:10.766695  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:14:10.766836  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:14:10.766852  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:14:10.766897  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:14:10.766999  259325 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:14:10.767008  259325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:14:10.767046  259325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:14:10.767144  259325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.newest-cni-036155 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-036155]
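	(Whether the SANs listed above actually landed in the generated server certificate can be confirmed with openssl; the path is the ServerCertPath from the auth options above:)

	openssl x509 -in /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'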
	I1025 09:14:11.350247  247074 out.go:179] * Done! kubectl is now configured to use "embed-certs-106968" cluster and "default" namespace by default
	I1025 09:14:08.972298  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:08.972739  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
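	(The same healthz probe is easy to reproduce by hand; a minimal curl sketch, IP and port taken from the log, -k because the host does not trust the apiserver certificate:)

	curl -sk --max-time 2 https://192.168.85.2:8443/healthz; echo   # "connection refused" reproduces the stopped state above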
	I1025 09:14:08.972796  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:08.972855  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:09.003134  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:09.003160  225660 cri.go:89] found id: ""
	I1025 09:14:09.003170  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:09.003229  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.007677  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:09.007750  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:09.038302  225660 cri.go:89] found id: ""
	I1025 09:14:09.038326  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.038335  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:09.038341  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:09.038431  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:09.066635  225660 cri.go:89] found id: ""
	I1025 09:14:09.066680  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.066692  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:09.066698  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:09.066754  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:09.093560  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:09.093582  225660 cri.go:89] found id: ""
	I1025 09:14:09.093591  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:09.093678  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.097667  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:09.097735  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:09.124755  225660 cri.go:89] found id: ""
	I1025 09:14:09.124779  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.124787  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:09.124792  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:09.124838  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:09.151173  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:09.151200  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:09.151206  225660 cri.go:89] found id: ""
	I1025 09:14:09.151216  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:09.151274  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.155517  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:09.159318  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:09.159371  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:09.185902  225660 cri.go:89] found id: ""
	I1025 09:14:09.185929  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.185937  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:09.185942  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:09.185990  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:09.213382  225660 cri.go:89] found id: ""
	I1025 09:14:09.213406  225660 logs.go:282] 0 containers: []
	W1025 09:14:09.213414  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:09.213427  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:09.213437  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:09.227962  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:09.227989  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:09.286897  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:09.286914  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:09.286930  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:09.344244  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:09.344280  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:09.372387  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:09.372412  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:09.404393  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:09.404442  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:09.445740  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:09.445773  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:09.473530  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:09.473557  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:09.530325  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:09.530359  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
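	(The whole log-gathering pass above boils down to a handful of commands that can be run manually on the node, substituting a container ID from the crictl listings:)

	sudo journalctl -u kubelet -n 400          # kubelet logs
	sudo journalctl -u crio -n 400             # CRI-O logs
	sudo crictl ps -a                          # container status
	sudo crictl logs --tail 400 <container-id> # per-container logs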
	I1025 09:14:12.126696  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:12.127001  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:12.127041  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:12.127078  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:12.156258  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:12.156278  225660 cri.go:89] found id: ""
	I1025 09:14:12.156286  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:12.156333  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.160830  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:12.160899  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:12.189251  225660 cri.go:89] found id: ""
	I1025 09:14:12.189276  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.189284  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:12.189291  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:12.189345  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:12.218011  225660 cri.go:89] found id: ""
	I1025 09:14:12.218040  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.218051  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:12.218058  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:12.218110  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:12.246768  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:12.246792  225660 cri.go:89] found id: ""
	I1025 09:14:12.246800  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:12.246849  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.250850  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:12.250911  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:12.279387  225660 cri.go:89] found id: ""
	I1025 09:14:12.279415  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.279430  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:12.279435  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:12.279493  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:12.309764  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:12.309788  225660 cri.go:89] found id: "fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:12.309794  225660 cri.go:89] found id: ""
	I1025 09:14:12.309803  225660 logs.go:282] 2 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a]
	I1025 09:14:12.309858  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.314431  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:12.318673  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:12.318743  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:12.348251  225660 cri.go:89] found id: ""
	I1025 09:14:12.348282  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.348293  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:12.348301  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:12.348354  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:12.376469  225660 cri.go:89] found id: ""
	I1025 09:14:12.376500  225660 logs.go:282] 0 containers: []
	W1025 09:14:12.376517  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:12.376532  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:12.376543  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:12.481987  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:12.482020  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:12.501685  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:12.501719  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:12.561742  225660 logs.go:123] Gathering logs for kube-controller-manager [fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a] ...
	I1025 09:14:12.561777  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbd63b3f2fc765564ad4bedd22afb5e69961ff65a6d76d7eca2ccf42aa886c8a"
	I1025 09:14:12.595479  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:12.595510  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:12.657485  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:12.657516  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:12.724018  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:12.724046  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:12.724063  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:12.758682  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:12.758719  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:11.510510  259325 provision.go:177] copyRemoteCerts
	I1025 09:14:11.510574  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:14:11.510609  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.528759  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:11.630293  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:14:11.649620  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:14:11.667356  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:14:11.684854  259325 provision.go:87] duration metric: took 936.957621ms to configureAuth
	I1025 09:14:11.684892  259325 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:14:11.685064  259325 config.go:182] Loaded profile config "newest-cni-036155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:11.685161  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.703806  259325 main.go:141] libmachine: Using SSH client type: native
	I1025 09:14:11.704008  259325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1025 09:14:11.704026  259325 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:14:11.968181  259325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:14:11.968209  259325 machine.go:96] duration metric: took 4.726701907s to provisionDockerMachine
	I1025 09:14:11.968221  259325 client.go:171] duration metric: took 10.423315226s to LocalClient.Create
	I1025 09:14:11.968243  259325 start.go:167] duration metric: took 10.423381733s to libmachine.API.Create "newest-cni-036155"
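	(With provisioning done, the CRIO_MINIKUBE_OPTIONS drop-in written a few lines above can be verified from the host; a small sketch:)

	docker exec newest-cni-036155 cat /etc/sysconfig/crio.minikube   # should show --insecure-registry 10.96.0.0/12
	docker exec newest-cni-036155 systemctl is-active crio           # confirms the restart took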
	I1025 09:14:11.968252  259325 start.go:293] postStartSetup for "newest-cni-036155" (driver="docker")
	I1025 09:14:11.968273  259325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:14:11.968342  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:14:11.968382  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:11.988313  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.091847  259325 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:14:12.096150  259325 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:14:12.096175  259325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:14:12.096187  259325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:14:12.096246  259325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:14:12.096338  259325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:14:12.096472  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:14:12.104581  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:14:12.125866  259325 start.go:296] duration metric: took 157.598101ms for postStartSetup
	I1025 09:14:12.126207  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:12.145205  259325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/config.json ...
	I1025 09:14:12.145547  259325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:14:12.145602  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.166198  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.265965  259325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:14:12.271045  259325 start.go:128] duration metric: took 10.728656434s to createHost
	I1025 09:14:12.271079  259325 start.go:83] releasing machines lock for "newest-cni-036155", held for 10.728853828s
	I1025 09:14:12.271157  259325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-036155
	I1025 09:14:12.292688  259325 ssh_runner.go:195] Run: cat /version.json
	I1025 09:14:12.292723  259325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:14:12.292742  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.292793  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:12.314352  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.314667  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:12.483055  259325 ssh_runner.go:195] Run: systemctl --version
	I1025 09:14:12.490540  259325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:14:12.536231  259325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:14:12.541807  259325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:14:12.541870  259325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:14:12.571901  259325 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:14:12.571931  259325 start.go:495] detecting cgroup driver to use...
	I1025 09:14:12.571966  259325 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:14:12.572017  259325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:14:12.596449  259325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:14:12.611557  259325 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:14:12.611628  259325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:14:12.630533  259325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:14:12.648087  259325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:14:12.736517  259325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:14:12.839188  259325 docker.go:234] disabling docker service ...
	I1025 09:14:12.839286  259325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:14:12.859123  259325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:14:12.873528  259325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:14:12.959727  259325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:14:13.046275  259325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:14:13.059833  259325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:14:13.074282  259325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:14:13.074351  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.085056  259325 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:14:13.085131  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.094564  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.103436  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.112411  259325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:14:13.120618  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.129243  259325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.143332  259325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:14:13.152512  259325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:14:13.160145  259325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:14:13.167921  259325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:14:13.247586  259325 ssh_runner.go:195] Run: sudo systemctl restart crio
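	(The sed-based edits above leave the pause image, cgroup manager, conmon cgroup, and sysctl overrides in /etc/crio/crio.conf.d/02-crio.conf; they can be spot-checked on the node before the restart is trusted:)

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio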
	I1025 09:14:13.369361  259325 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:14:13.369432  259325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:14:13.373738  259325 start.go:563] Will wait 60s for crictl version
	I1025 09:14:13.373798  259325 ssh_runner.go:195] Run: which crictl
	I1025 09:14:13.377873  259325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:14:13.402547  259325 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:14:13.402629  259325 ssh_runner.go:195] Run: crio --version
	I1025 09:14:13.435875  259325 ssh_runner.go:195] Run: crio --version
	I1025 09:14:13.466340  259325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:14:13.467881  259325 cli_runner.go:164] Run: docker network inspect newest-cni-036155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:14:13.486741  259325 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:14:13.491163  259325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:14:13.503996  259325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:14:13.505132  259325 kubeadm.go:883] updating cluster {Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:14:13.505308  259325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:14:13.505385  259325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:14:13.537110  259325 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:14:13.537138  259325 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:14:13.537208  259325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:14:13.565601  259325 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:14:13.565629  259325 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:14:13.565668  259325 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:14:13.565770  259325 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-036155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:14:13.565852  259325 ssh_runner.go:195] Run: crio config
	I1025 09:14:13.613362  259325 cni.go:84] Creating CNI manager for ""
	I1025 09:14:13.613386  259325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:14:13.613402  259325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:14:13.613423  259325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-036155 NodeName:newest-cni-036155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:14:13.613560  259325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-036155"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:14:13.613625  259325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:14:13.621734  259325 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:14:13.621798  259325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:14:13.629658  259325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:14:13.642503  259325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:14:13.657918  259325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
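	(The 2214-byte kubeadm.yaml.new written above can be sanity-checked on the node before kubeadm consumes it; a sketch, assuming the bundled kubeadm supports the `config validate` subcommand:)

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new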
	I1025 09:14:13.670798  259325 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:14:13.674428  259325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:14:13.684203  259325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:14:13.764843  259325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:14:13.785140  259325 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155 for IP: 192.168.103.2
	I1025 09:14:13.785167  259325 certs.go:195] generating shared ca certs ...
	I1025 09:14:13.785187  259325 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:13.785344  259325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:14:13.785395  259325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:14:13.785408  259325 certs.go:257] generating profile certs ...
	I1025 09:14:13.785477  259325 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key
	I1025 09:14:13.785494  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt with IP's: []
	I1025 09:14:14.040562  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt ...
	I1025 09:14:14.040589  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.crt: {Name:mk646b8f9783dd9e4707890963ea7e898faa4fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.040796  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key ...
	I1025 09:14:14.040814  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/client.key: {Name:mkc53418ebf76ccde9e19bfb0999b44fd01a281b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.040936  259325 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f
	I1025 09:14:14.040955  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:14:14.178872  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f ...
	I1025 09:14:14.178902  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f: {Name:mk6d40b7bebb79f6059b96eb77ffd7cc4e3645e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.179108  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f ...
	I1025 09:14:14.179126  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f: {Name:mkbc6d5a1a1415943f145cdf28bbee21fccbc4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.179228  259325 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt.a5ae507f -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt
	I1025 09:14:14.179331  259325 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key.a5ae507f -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key
	I1025 09:14:14.179401  259325 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key
	I1025 09:14:14.179419  259325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt with IP's: []
	I1025 09:14:14.456160  259325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt ...
	I1025 09:14:14.456187  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt: {Name:mk6afabad4b505221210ee1843d1e445e48419a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.456387  259325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key ...
	I1025 09:14:14.456405  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key: {Name:mk8b837757d816131e1957def20b89352fbd6a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:14.456615  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:14:14.456680  259325 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:14:14.456693  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:14:14.456721  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:14:14.456755  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:14:14.456784  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:14:14.456839  259325 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:14:14.457424  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:14:14.475888  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:14:14.494249  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:14:14.512460  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:14:14.530466  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:14:14.550613  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:14:14.569632  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:14:14.588778  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/newest-cni-036155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:14:14.607411  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:14:14.627793  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:14:14.645515  259325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:14:14.662990  259325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
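	(A quick way to confirm that each copied certificate matches its private key is to compare moduli; a sketch assuming RSA keys, consistent with the 1675-byte key.pem sizes above:)

	openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
	openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5   # the two hashes must match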
	I1025 09:14:14.675778  259325 ssh_runner.go:195] Run: openssl version
	I1025 09:14:14.682117  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:14:14.690896  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.694728  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.694786  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:14:14.729366  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:14:14.738443  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:14:14.747390  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.751269  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.751325  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:14:14.787080  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:14:14.797279  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:14:14.806185  259325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.809958  259325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.810016  259325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:14:14.844458  259325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
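The sequence above follows the standard OpenSSL trust-store convention: each CA is copied under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and a symlink named `<hash>.0` is created in /etc/ssl/certs so OpenSSL-linked clients can locate the CA by hash. A minimal Go sketch of the same steps (hypothetical helper, not minikube source), assuming openssl is on PATH and the process can write to /etc/ssl/certs:

// Hypothetical sketch mirroring the trust-store steps logged above:
// hash a PEM cert with openssl, then link it into /etc/ssl/certs under
// the "<subject-hash>.0" name that OpenSSL's lookup expects.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of: test -L /etc/ssl/certs/<hash>.0 || ln -fs <pemPath> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // something already sits at the link path; leave it alone
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}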
	I1025 09:14:14.853408  259325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:14:14.857106  259325 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:14:14.857170  259325 kubeadm.go:400] StartCluster: {Name:newest-cni-036155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-036155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:14:14.857267  259325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:14:14.857318  259325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:14:14.885203  259325 cri.go:89] found id: ""
	I1025 09:14:14.885275  259325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:14:14.894314  259325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:14:14.902526  259325 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:14:14.902581  259325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:14:14.910548  259325 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:14:14.910567  259325 kubeadm.go:157] found existing configuration files:
	
	I1025 09:14:14.910606  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:14:14.918559  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:14:14.918617  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:14:14.926037  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:14:14.933744  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:14:14.933812  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:14:14.941147  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:14:14.949023  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:14:14.949074  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:14:14.956352  259325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:14:14.963871  259325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:14:14.963917  259325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
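The four grep-then-rm exchanges above implement a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and a failed grep (status 2 here, because none of the files exist on first start) triggers removal so kubeadm regenerates them. A local sketch of the same logic, assuming direct filesystem access instead of the SSH runner the log uses:

// Hypothetical local sketch of the stale-kubeconfig cleanup above: keep a
// kubeconfig only if it already names the expected control-plane endpoint,
// otherwise remove it so kubeadm can write a fresh one.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Equivalent of: sudo rm -f <conf> after the grep fails.
			os.Remove(conf)
			fmt.Printf("removed stale %s\n", conf)
		}
	}
}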
	I1025 09:14:14.971281  259325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:14:15.012944  259325 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:14:15.013018  259325 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:14:15.034481  259325 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:14:15.034629  259325 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:14:15.034715  259325 kubeadm.go:318] OS: Linux
	I1025 09:14:15.034799  259325 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:14:15.034865  259325 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:14:15.034941  259325 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:14:15.035026  259325 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:14:15.035104  259325 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:14:15.035174  259325 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:14:15.035234  259325 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:14:15.035306  259325 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:14:15.095588  259325 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:14:15.095759  259325 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:14:15.095880  259325 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:14:15.103017  259325 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:14:15.106084  259325 out.go:252]   - Generating certificates and keys ...
	I1025 09:14:15.106182  259325 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:14:15.106260  259325 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:14:15.271964  259325 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:14:15.313276  259325 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:14:15.508442  259325 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:14:15.535170  259325 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:14:15.844944  259325 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:14:15.845122  259325 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-036155] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:14:16.013299  259325 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:14:16.013491  259325 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-036155] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:14:16.266960  259325 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:14:12.796156  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:12.796183  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:15.330709  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:15.331131  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:15.331185  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:15.331257  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:15.361721  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:15.361747  225660 cri.go:89] found id: ""
	I1025 09:14:15.361757  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:15.361820  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.366052  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:15.366106  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:15.393921  225660 cri.go:89] found id: ""
	I1025 09:14:15.393946  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.393953  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:15.393958  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:15.394003  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:15.421456  225660 cri.go:89] found id: ""
	I1025 09:14:15.421483  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.421494  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:15.421501  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:15.421566  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:15.449595  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:15.449622  225660 cri.go:89] found id: ""
	I1025 09:14:15.449631  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:15.449706  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.453889  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:15.453971  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:15.481414  225660 cri.go:89] found id: ""
	I1025 09:14:15.481440  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.481450  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:15.481458  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:15.481532  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:15.509346  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:15.509385  225660 cri.go:89] found id: ""
	I1025 09:14:15.509395  225660 logs.go:282] 1 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692]
	I1025 09:14:15.509452  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:15.513693  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:15.513759  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:15.540722  225660 cri.go:89] found id: ""
	I1025 09:14:15.540753  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.540765  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:15.540772  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:15.540828  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:15.569576  225660 cri.go:89] found id: ""
	I1025 09:14:15.569607  225660 logs.go:282] 0 containers: []
	W1025 09:14:15.569618  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:15.569630  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:15.569659  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:15.625756  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:15.625804  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:15.657463  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:15.657491  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:15.745931  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:15.745976  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:15.761570  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:15.761599  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:15.820944  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:15.820966  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:15.820980  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:15.853603  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:15.853634  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:15.905243  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:15.905280  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:16.769058  259325 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:14:17.427908  259325 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:14:17.428076  259325 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:14:17.701563  259325 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:14:17.897864  259325 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:14:17.978230  259325 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:14:18.126870  259325 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:14:18.386586  259325 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:14:18.387355  259325 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:14:18.392686  259325 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:14:18.396175  259325 out.go:252]   - Booting up control plane ...
	I1025 09:14:18.396293  259325 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:14:18.396422  259325 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:14:18.396525  259325 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:14:18.410900  259325 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:14:18.411027  259325 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:14:18.419592  259325 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:14:18.419953  259325 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:14:18.420077  259325 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:14:18.536561  259325 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:14:18.536805  259325 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:14:20.037808  259325 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501278518s
	I1025 09:14:20.041939  259325 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:14:20.042097  259325 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1025 09:14:20.042230  259325 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:14:20.042341  259325 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
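kubeadm's kubelet-check and control-plane-check phases poll plain healthz/livez endpoints until they answer 200 or the 4m0s deadline passes, as the "... is healthy after ..." lines further down confirm. A minimal polling sketch (hypothetical, plain-HTTP only; the HTTPS component checks would additionally need TLS configuration):

// Hypothetical sketch of the healthz polling seen in the kubelet-check and
// control-plane-check lines: retry until 200 OK or the deadline expires.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The kubelet healthz endpoint from the log; 4m0s matches the logged budget.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}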
	I1025 09:14:18.434387  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:18.434865  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:18.434925  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:18.434979  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:18.469891  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:18.469916  225660 cri.go:89] found id: ""
	I1025 09:14:18.469925  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:18.469983  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:18.473961  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:18.474038  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:18.503655  225660 cri.go:89] found id: ""
	I1025 09:14:18.503683  225660 logs.go:282] 0 containers: []
	W1025 09:14:18.503695  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:18.503703  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:18.503768  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:18.533993  225660 cri.go:89] found id: ""
	I1025 09:14:18.534022  225660 logs.go:282] 0 containers: []
	W1025 09:14:18.534033  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:18.534041  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:18.534101  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:18.570458  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:18.570484  225660 cri.go:89] found id: ""
	I1025 09:14:18.570496  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:18.570560  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:18.575721  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:18.575798  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:18.609317  225660 cri.go:89] found id: ""
	I1025 09:14:18.609399  225660 logs.go:282] 0 containers: []
	W1025 09:14:18.609414  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:18.609422  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:18.609473  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:18.643101  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:18.643123  225660 cri.go:89] found id: ""
	I1025 09:14:18.643130  225660 logs.go:282] 1 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692]
	I1025 09:14:18.643172  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:18.648324  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:18.648408  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:18.681042  225660 cri.go:89] found id: ""
	I1025 09:14:18.681072  225660 logs.go:282] 0 containers: []
	W1025 09:14:18.681083  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:18.681090  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:18.681151  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:18.715878  225660 cri.go:89] found id: ""
	I1025 09:14:18.715903  225660 logs.go:282] 0 containers: []
	W1025 09:14:18.715916  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:18.715928  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:18.715950  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:18.748571  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:18.748600  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:18.815749  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:18.815786  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:18.855575  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:18.855611  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:18.964912  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:18.964943  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:18.984560  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:18.984597  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:19.064198  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:19.064219  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:19.064231  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:19.107277  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:19.107311  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:21.676728  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:21.677138  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:21.677199  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:21.677255  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:21.709095  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:21.709123  225660 cri.go:89] found id: ""
	I1025 09:14:21.709133  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:21.709192  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:21.713497  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:21.713575  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:21.744457  225660 cri.go:89] found id: ""
	I1025 09:14:21.744479  225660 logs.go:282] 0 containers: []
	W1025 09:14:21.744486  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:21.744491  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:21.744534  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:21.775791  225660 cri.go:89] found id: ""
	I1025 09:14:21.775821  225660 logs.go:282] 0 containers: []
	W1025 09:14:21.775832  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:21.775839  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:21.775929  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:21.808501  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:21.808527  225660 cri.go:89] found id: ""
	I1025 09:14:21.808538  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:21.808600  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:21.812763  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:21.812837  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:21.844116  225660 cri.go:89] found id: ""
	I1025 09:14:21.844146  225660 logs.go:282] 0 containers: []
	W1025 09:14:21.844158  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:21.844166  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:21.844226  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:21.876400  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:21.876422  225660 cri.go:89] found id: ""
	I1025 09:14:21.876429  225660 logs.go:282] 1 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692]
	I1025 09:14:21.876492  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:21.881609  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:21.881685  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:21.913115  225660 cri.go:89] found id: ""
	I1025 09:14:21.913140  225660 logs.go:282] 0 containers: []
	W1025 09:14:21.913150  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:21.913163  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:21.913222  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:21.945411  225660 cri.go:89] found id: ""
	I1025 09:14:21.945441  225660 logs.go:282] 0 containers: []
	W1025 09:14:21.945453  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:21.945464  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:21.945485  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:21.980671  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:21.980700  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:22.044727  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:22.044760  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:22.074627  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:22.074677  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:22.132065  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:22.132098  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:22.166936  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:22.166961  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:22.278224  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:22.278256  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:22.293733  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:22.293765  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:22.362499  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:21.504564  259325 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.462277724s
	I1025 09:14:22.656135  259325 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.614236262s
	I1025 09:14:24.543972  259325 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501751672s
	I1025 09:14:24.556655  259325 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:14:24.568192  259325 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:14:24.577860  259325 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:14:24.578082  259325 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-036155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:14:24.586833  259325 kubeadm.go:318] [bootstrap-token] Using token: ojmlbq.yuvz6b74jk8hoh9z
	I1025 09:14:24.588217  259325 out.go:252]   - Configuring RBAC rules ...
	I1025 09:14:24.588369  259325 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:14:24.591843  259325 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:14:24.598265  259325 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:14:24.601185  259325 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:14:24.604260  259325 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:14:24.606875  259325 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:14:24.950954  259325 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:14:25.369563  259325 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:14:25.950142  259325 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:14:25.951105  259325 kubeadm.go:318] 
	I1025 09:14:25.951172  259325 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:14:25.951204  259325 kubeadm.go:318] 
	I1025 09:14:25.951315  259325 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:14:25.951325  259325 kubeadm.go:318] 
	I1025 09:14:25.951371  259325 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:14:25.951456  259325 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:14:25.951533  259325 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:14:25.951543  259325 kubeadm.go:318] 
	I1025 09:14:25.951636  259325 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:14:25.951668  259325 kubeadm.go:318] 
	I1025 09:14:25.951731  259325 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:14:25.951748  259325 kubeadm.go:318] 
	I1025 09:14:25.951820  259325 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:14:25.951935  259325 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:14:25.952034  259325 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:14:25.952046  259325 kubeadm.go:318] 
	I1025 09:14:25.952161  259325 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:14:25.952266  259325 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:14:25.952277  259325 kubeadm.go:318] 
	I1025 09:14:25.952397  259325 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ojmlbq.yuvz6b74jk8hoh9z \
	I1025 09:14:25.952545  259325 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:14:25.952594  259325 kubeadm.go:318] 	--control-plane 
	I1025 09:14:25.952602  259325 kubeadm.go:318] 
	I1025 09:14:25.952731  259325 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:14:25.952741  259325 kubeadm.go:318] 
	I1025 09:14:25.952844  259325 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ojmlbq.yuvz6b74jk8hoh9z \
	I1025 09:14:25.952984  259325 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:14:25.955668  259325 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:14:25.955828  259325 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:14:25.955850  259325 cni.go:84] Creating CNI manager for ""
	I1025 09:14:25.955859  259325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:14:25.957822  259325 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:14:25.959241  259325 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:14:25.964021  259325 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:14:25.964039  259325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:14:25.978189  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
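Note that the CNI manifest is applied with the version-pinned kubectl binary and the in-guest kubeconfig rather than any kubectl on the host. A sketch of the same invocation driven from Go, using the paths exactly as they appear in the log line above:

// Hypothetical sketch of the logged kubectl apply for the CNI manifest.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}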
	I1025 09:14:26.193389  259325 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:14:26.193482  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:26.193503  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-036155 minikube.k8s.io/updated_at=2025_10_25T09_14_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=newest-cni-036155 minikube.k8s.io/primary=true
	I1025 09:14:26.205739  259325 ops.go:34] apiserver oom_adj: -16
	I1025 09:14:26.290892  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:24.862698  225660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:14:24.863161  225660 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1025 09:14:24.863228  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 09:14:24.863295  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 09:14:24.890540  225660 cri.go:89] found id: "987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:24.890559  225660 cri.go:89] found id: ""
	I1025 09:14:24.890567  225660 logs.go:282] 1 containers: [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba]
	I1025 09:14:24.890610  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:24.894568  225660 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 09:14:24.894628  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 09:14:24.923043  225660 cri.go:89] found id: ""
	I1025 09:14:24.923070  225660 logs.go:282] 0 containers: []
	W1025 09:14:24.923081  225660 logs.go:284] No container was found matching "etcd"
	I1025 09:14:24.923088  225660 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 09:14:24.923146  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 09:14:24.950489  225660 cri.go:89] found id: ""
	I1025 09:14:24.950517  225660 logs.go:282] 0 containers: []
	W1025 09:14:24.950527  225660 logs.go:284] No container was found matching "coredns"
	I1025 09:14:24.950534  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 09:14:24.950598  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 09:14:24.981035  225660 cri.go:89] found id: "e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:24.981058  225660 cri.go:89] found id: ""
	I1025 09:14:24.981066  225660 logs.go:282] 1 containers: [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b]
	I1025 09:14:24.981119  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:24.987835  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 09:14:24.987911  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 09:14:25.020723  225660 cri.go:89] found id: ""
	I1025 09:14:25.020751  225660 logs.go:282] 0 containers: []
	W1025 09:14:25.020763  225660 logs.go:284] No container was found matching "kube-proxy"
	I1025 09:14:25.020770  225660 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 09:14:25.020829  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 09:14:25.051770  225660 cri.go:89] found id: "0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:25.051793  225660 cri.go:89] found id: ""
	I1025 09:14:25.051802  225660 logs.go:282] 1 containers: [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692]
	I1025 09:14:25.051861  225660 ssh_runner.go:195] Run: which crictl
	I1025 09:14:25.055690  225660 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 09:14:25.055755  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 09:14:25.084107  225660 cri.go:89] found id: ""
	I1025 09:14:25.084139  225660 logs.go:282] 0 containers: []
	W1025 09:14:25.084147  225660 logs.go:284] No container was found matching "kindnet"
	I1025 09:14:25.084160  225660 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 09:14:25.084221  225660 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 09:14:25.110884  225660 cri.go:89] found id: ""
	I1025 09:14:25.110911  225660 logs.go:282] 0 containers: []
	W1025 09:14:25.110943  225660 logs.go:284] No container was found matching "storage-provisioner"
	I1025 09:14:25.110955  225660 logs.go:123] Gathering logs for dmesg ...
	I1025 09:14:25.110970  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 09:14:25.127764  225660 logs.go:123] Gathering logs for describe nodes ...
	I1025 09:14:25.127795  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 09:14:25.201049  225660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 09:14:25.201072  225660 logs.go:123] Gathering logs for kube-apiserver [987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba] ...
	I1025 09:14:25.201088  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 987eccc028756731efbecb1c85701a1420ed98912b1a8b86f051ac1243c911ba"
	I1025 09:14:25.238494  225660 logs.go:123] Gathering logs for kube-scheduler [e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b] ...
	I1025 09:14:25.238527  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e24cdcdd240e98c843eca06bac0703db40dd6955707c6985a3b380c2c852bd7b"
	I1025 09:14:25.291749  225660 logs.go:123] Gathering logs for kube-controller-manager [0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692] ...
	I1025 09:14:25.291785  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0359873b3fd75b44392a955a44adcab7d1685179d21fb5be33eb9eedc1f6d692"
	I1025 09:14:25.319077  225660 logs.go:123] Gathering logs for CRI-O ...
	I1025 09:14:25.319116  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 09:14:25.386063  225660 logs.go:123] Gathering logs for container status ...
	I1025 09:14:25.386096  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 09:14:25.420015  225660 logs.go:123] Gathering logs for kubelet ...
	I1025 09:14:25.420044  225660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 09:14:26.791915  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:27.291600  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:27.791517  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:28.291385  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:28.791733  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:29.290970  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:29.791222  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:30.291050  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:30.791323  259325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:14:30.870066  259325 kubeadm.go:1113] duration metric: took 4.676646886s to wait for elevateKubeSystemPrivileges
	I1025 09:14:30.870107  259325 kubeadm.go:402] duration metric: took 16.012945128s to StartCluster
	I1025 09:14:30.870130  259325 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:30.870221  259325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:14:30.872696  259325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:14:30.872968  259325 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:14:30.873023  259325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:14:30.873039  259325 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:14:30.873141  259325 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-036155"
	I1025 09:14:30.873147  259325 addons.go:69] Setting default-storageclass=true in profile "newest-cni-036155"
	I1025 09:14:30.873160  259325 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-036155"
	I1025 09:14:30.873168  259325 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-036155"
	I1025 09:14:30.873196  259325 host.go:66] Checking if "newest-cni-036155" exists ...
	I1025 09:14:30.873231  259325 config.go:182] Loaded profile config "newest-cni-036155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:30.873572  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:30.873793  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:30.876753  259325 out.go:179] * Verifying Kubernetes components...
	I1025 09:14:30.878077  259325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:14:30.898130  259325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:14:30.898485  259325 addons.go:238] Setting addon default-storageclass=true in "newest-cni-036155"
	I1025 09:14:30.898528  259325 host.go:66] Checking if "newest-cni-036155" exists ...
	I1025 09:14:30.899048  259325 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:14:30.899795  259325 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:14:30.899817  259325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:14:30.899882  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:14:30.926003  259325 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:14:30.926359  259325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:14:30.926472  259325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
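Both docker-inspect calls above use a Go template to pull the host port that Docker mapped to the container's SSH port 22/tcp; the sshutil lines that follow then dial 127.0.0.1 on that port (33085 here). A standalone sketch of the lookup:

// Hypothetical sketch of the port lookup in the docker-inspect lines above:
// ask Docker which host port is bound to the container's 22/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "newest-cni-036155").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33085 in this run
}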
	I1025 09:14:30.930878  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:30.951738  259325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:14:30.964072  259325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
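The sed pipeline above rewrites the CoreDNS Corefile in flight: it inserts a hosts{} block resolving host.minikube.internal to the gateway IP ahead of the forward plugin (and adds the log plugin after errors), then pipes the result into kubectl replace. A hypothetical Go sketch of the core string edit, covering only the hosts-block insertion:

// Hypothetical sketch of the Corefile rewrite performed by the sed pipeline:
// insert a hosts{} block for host.minikube.internal before the forward plugin.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	// Place the block immediately before the forward plugin, as sed's
	// "/^        forward . \/etc\/resolv.conf.*/i" address does above.
	return strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.103.1"))
}

The "host record injected into CoreDNS's ConfigMap" line a few entries below reports the successful outcome of this replace.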
	I1025 09:14:31.014232  259325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:14:31.045728  259325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:14:31.064244  259325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:14:31.129756  259325 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1025 09:14:31.131602  259325 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:14:31.131668  259325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:14:31.389134  259325 api_server.go:72] duration metric: took 516.131196ms to wait for apiserver process to appear ...
	I1025 09:14:31.389178  259325 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:14:31.389198  259325 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:14:31.394849  259325 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:14:31.395806  259325 api_server.go:141] control plane version: v1.34.1
	I1025 09:14:31.395829  259325 api_server.go:131] duration metric: took 6.644851ms to wait for apiserver health ...
	I1025 09:14:31.395838  259325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:14:31.397079  259325 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:14:31.401276  259325 addons.go:514] duration metric: took 528.232712ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:14:31.401893  259325 system_pods.go:59] 8 kube-system pods found
	I1025 09:14:31.401924  259325 system_pods.go:61] "coredns-66bc5c9577-2g5ff" [e04ebd67-9bdd-4a26-82af-101ff41eedda] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:14:31.401932  259325 system_pods.go:61] "etcd-newest-cni-036155" [d1a4d41a-121c-4e23-8f27-041b0c466f32] Running
	I1025 09:14:31.401940  259325 system_pods.go:61] "kindnet-pbnz4" [176c8540-38da-4aff-8d5f-39bf3ec9b000] Running
	I1025 09:14:31.401950  259325 system_pods.go:61] "kube-apiserver-newest-cni-036155" [b360387d-1669-4045-b183-39d03ec7a19e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:14:31.401962  259325 system_pods.go:61] "kube-controller-manager-newest-cni-036155" [cc572f9d-d43f-4f74-87b0-f11e901a3aaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:14:31.401971  259325 system_pods.go:61] "kube-proxy-6wgfs" [654d5723-8e97-4e1c-ab21-08e23e9f574e] Running
	I1025 09:14:31.401978  259325 system_pods.go:61] "kube-scheduler-newest-cni-036155" [1eb36612-7fe2-4dd1-9c4b-0340cd11256b] Running
	I1025 09:14:31.401993  259325 system_pods.go:61] "storage-provisioner" [45618c3a-b904-4a33-bac3-007339abbade] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:14:31.402004  259325 system_pods.go:74] duration metric: took 6.159065ms to wait for pod list to return data ...
	I1025 09:14:31.402018  259325 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:14:31.404724  259325 default_sa.go:45] found service account: "default"
	I1025 09:14:31.404746  259325 default_sa.go:55] duration metric: took 2.717923ms for default service account to be created ...
	I1025 09:14:31.404760  259325 kubeadm.go:586] duration metric: took 531.760819ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:14:31.404784  259325 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:14:31.407401  259325 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:14:31.407424  259325 node_conditions.go:123] node cpu capacity is 8
	I1025 09:14:31.407444  259325 node_conditions.go:105] duration metric: took 2.653925ms to run NodePressure ...
	I1025 09:14:31.407455  259325 start.go:241] waiting for startup goroutines ...
	I1025 09:14:31.635139  259325 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-036155" context rescaled to 1 replicas
	I1025 09:14:31.635174  259325 start.go:246] waiting for cluster config update ...
	I1025 09:14:31.635187  259325 start.go:255] writing updated cluster config ...
	I1025 09:14:31.635525  259325 ssh_runner.go:195] Run: rm -f paused
	I1025 09:14:31.693302  259325 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:14:31.696256  259325 out.go:179] * Done! kubectl is now configured to use "newest-cni-036155" cluster and "default" namespace by default
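
The sed pipeline logged at 09:14:30 rewrote the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.103.1, and the start log confirms the record was injected. A quick manual verification (a sketch; the context name is taken from this run, and the hosts block only exists after the injection succeeds):

	kubectl --context newest-cni-036155 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'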
	
	
	==> CRI-O <==
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.092578863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.0942358Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6bf24d35-75ce-498d-9ad2-d40da9f4336f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.096888907Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.097444948Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c1c5c387-70cb-4fd4-bf64-b9c41375a417 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.097836917Z" level=info msg="Ran pod sandbox 6931e2ad5868bb075bf0e0984a21810a32281502376dced4b4904ee52ae5a7ff with infra container: kube-system/kube-proxy-6wgfs/POD" id=6bf24d35-75ce-498d-9ad2-d40da9f4336f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.099080305Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.099299092Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4905bb97-c533-4176-abce-e4be7757fc48 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.099975702Z" level=info msg="Ran pod sandbox 4e3d1f1460cd8efd004239ca1d1cbf2edd73dbcaa3904ef5ff2cb0266ddbc9f6 with infra container: kube-system/kindnet-pbnz4/POD" id=c1c5c387-70cb-4fd4-bf64-b9c41375a417 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.100663714Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=28eba589-3431-4e23-a692-922495be0632 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.101409464Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1979a034-eee0-46ae-9ac6-2aae9e872037 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.103577836Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=40d2ea5e-06a2-4969-aed3-ef186d0c4255 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.105157725Z" level=info msg="Creating container: kube-system/kube-proxy-6wgfs/kube-proxy" id=1b2eaa05-3dc4-4d70-a7d5-2e4ca3cfd294 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.105280257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.10720793Z" level=info msg="Creating container: kube-system/kindnet-pbnz4/kindnet-cni" id=326a60b2-4796-47fa-a5a0-71d5bd11b867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.107305808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.111326389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.113724389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.115682708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.116211931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.147579371Z" level=info msg="Created container f35ccc02c672ae4c61802ebe721fee1ee8edce3e776b6d44a5e0ede06d3e63fe: kube-system/kindnet-pbnz4/kindnet-cni" id=326a60b2-4796-47fa-a5a0-71d5bd11b867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.148466756Z" level=info msg="Starting container: f35ccc02c672ae4c61802ebe721fee1ee8edce3e776b6d44a5e0ede06d3e63fe" id=e3fb7651-5bbe-4271-84d0-ddc70ba8d2a5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.150908247Z" level=info msg="Created container acb142d25b8ffb0b12a0576310016fcdb4d325a2e5234f68f07c8f33f37127a2: kube-system/kube-proxy-6wgfs/kube-proxy" id=1b2eaa05-3dc4-4d70-a7d5-2e4ca3cfd294 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.151587436Z" level=info msg="Starting container: acb142d25b8ffb0b12a0576310016fcdb4d325a2e5234f68f07c8f33f37127a2" id=08dfbf1e-5852-4da6-9279-11645ffb6a48 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.15631929Z" level=info msg="Started container" PID=1620 containerID=f35ccc02c672ae4c61802ebe721fee1ee8edce3e776b6d44a5e0ede06d3e63fe description=kube-system/kindnet-pbnz4/kindnet-cni id=e3fb7651-5bbe-4271-84d0-ddc70ba8d2a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e3d1f1460cd8efd004239ca1d1cbf2edd73dbcaa3904ef5ff2cb0266ddbc9f6
	Oct 25 09:14:31 newest-cni-036155 crio[780]: time="2025-10-25T09:14:31.15687881Z" level=info msg="Started container" PID=1621 containerID=acb142d25b8ffb0b12a0576310016fcdb4d325a2e5234f68f07c8f33f37127a2 description=kube-system/kube-proxy-6wgfs/kube-proxy id=08dfbf1e-5852-4da6-9279-11645ffb6a48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6931e2ad5868bb075bf0e0984a21810a32281502376dced4b4904ee52ae5a7ff
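
The repeated "Skipping invalid sysctl" warnings here are benign: the kube-proxy and kindnet sandboxes run with host networking, and namespaced net.* sysctls such as net.ipv4.ip_unprivileged_port_start cannot be applied to host-network pods, so CRI-O skips them. This can be confirmed from the pod spec (a hypothetical follow-up command, not part of the captured run):

	kubectl --context newest-cni-036155 -n kube-system get pod kube-proxy-6wgfs -o jsonpath='{.spec.hostNetwork}'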
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f35ccc02c672a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   4e3d1f1460cd8       kindnet-pbnz4                               kube-system
	acb142d25b8ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   6931e2ad5868b       kube-proxy-6wgfs                            kube-system
	e6b1c8eac0844       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   d7f46ad77149c       kube-scheduler-newest-cni-036155            kube-system
	0ad197241089d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   b063afd839301       etcd-newest-cni-036155                      kube-system
	2f744cf6ffd61       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   39026e957f0b0       kube-apiserver-newest-cni-036155            kube-system
	c3a801f6f33f7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   5fdb21bb36491       kube-controller-manager-newest-cni-036155   kube-system
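
This table matches the layout of crictl ps -a output collected on the node. If reproducing by hand, the same view should be available via (profile name taken from this run):

	minikube -p newest-cni-036155 ssh -- sudo crictl ps -a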
	
	
	==> describe nodes <==
	Name:               newest-cni-036155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-036155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=newest-cni-036155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_14_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:14:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-036155
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:14:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:14:25 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:14:25 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:14:25 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:14:25 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-036155
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a6cc6b5c-90cd-48b2-886c-5a78739a4071
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-036155                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-pbnz4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-036155             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-036155    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-6wgfs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-036155             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-036155 event: Registered Node newest-cni-036155 in Controller
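
The two node.kubernetes.io/not-ready taints above explain the Pending coredns and storage-provisioner pods reported earlier: the Ready condition is False because no CNI configuration exists yet in /etc/cni/net.d/, and kindnet had only just started. The taints can be inspected directly (a sketch using this run's context name):

	kubectl --context newest-cni-036155 get node newest-cni-036155 -o jsonpath='{.spec.taints}'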
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
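
Note the timestamps: these martian-packet entries are from Oct25 08:32-08:33, roughly 40 minutes before this cluster started at 09:14, so they appear to be carry-over from earlier tests on the shared CI host rather than output of newest-cni-036155. Whether such packets are logged at all is controlled by a sysctl that can be checked with:

	sudo sysctl net.ipv4.conf.all.log_martians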
	
	
	==> etcd [0ad197241089d8f18e87de33294c8a62132d1fe6c1f0286012b8b826a9c9c5d2] <==
	{"level":"warn","ts":"2025-10-25T09:14:21.893827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.900234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.909433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.916672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.923924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.931243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.938895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.945749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.952580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.959294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.966244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.972993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.979480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.985804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.992092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:21.998442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.005974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.012682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.023588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.037062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.043088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.061741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.068763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.076176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:22.120670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40914","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:14:33 up 57 min,  0 user,  load average: 2.13, 2.96, 2.11
	Linux newest-cni-036155 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f35ccc02c672ae4c61802ebe721fee1ee8edce3e776b6d44a5e0ede06d3e63fe] <==
	I1025 09:14:31.354940       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:14:31.439697       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:14:31.439900       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:14:31.439926       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:14:31.439953       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:14:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:14:31.640486       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:14:31.640523       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:14:31.640538       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:14:31.651009       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:14:32.040781       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:14:32.040809       1 metrics.go:72] Registering metrics
	I1025 09:14:32.040884       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [2f744cf6ffd61c61a26b7d1a95c247b94339a141b2f1e7ad4a1690679ffb0c3e] <==
	I1025 09:14:22.703182       1 policy_source.go:240] refreshing policies
	E1025 09:14:22.737672       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1025 09:14:22.786283       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:14:22.787911       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:14:22.788077       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:14:22.793296       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:14:22.793781       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:14:22.884053       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:14:23.591452       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:14:23.596828       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:14:23.596850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:14:24.092032       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:14:24.128961       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:14:24.191526       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:14:24.197906       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1025 09:14:24.199025       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:14:24.203097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:14:24.615329       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:14:25.357371       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:14:25.368602       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:14:25.376323       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:14:29.617662       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:14:30.269958       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:14:30.275065       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:14:30.768065       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c3a801f6f33f728c11842b94b6304cfb24c9ee574059fa6f43ae9e6018eb4f5d] <==
	I1025 09:14:29.577597       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:14:29.578088       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-036155" podCIDRs=["10.42.0.0/24"]
	I1025 09:14:29.584418       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:14:29.591995       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:14:29.614533       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:14:29.614667       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:14:29.614704       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:14:29.614705       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:14:29.614994       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:14:29.615027       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:14:29.615400       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:14:29.616217       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:14:29.616252       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:14:29.616266       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:14:29.616302       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:14:29.616309       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:14:29.616322       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:14:29.616367       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:14:29.616448       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:14:29.616466       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:14:29.616848       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:14:29.617799       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:14:29.618995       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:14:29.620230       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:14:29.641416       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [acb142d25b8ffb0b12a0576310016fcdb4d325a2e5234f68f07c8f33f37127a2] <==
	I1025 09:14:31.215722       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:14:31.282231       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:14:31.382820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:14:31.382860       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:14:31.382979       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:14:31.408542       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:14:31.408620       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:14:31.414668       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:14:31.415131       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:14:31.415220       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:31.416774       1 config.go:200] "Starting service config controller"
	I1025 09:14:31.416792       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:14:31.416805       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:14:31.416820       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:14:31.416827       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:14:31.416827       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:14:31.416884       1 config.go:309] "Starting node config controller"
	I1025 09:14:31.416891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:14:31.516937       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:14:31.517023       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:14:31.517037       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:14:31.517043       1 shared_informer.go:356] "Caches are synced" controller="node config"
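
The "nodePortAddresses is unset" error line is a configuration hint rather than a failure: kube-proxy keeps running and all its caches sync a few hundred milliseconds later. In a kubeadm-managed cluster the setting lives in the kube-proxy ConfigMap; its current value can be checked with the command below (when unset, the key may simply be absent):

	kubectl --context newest-cni-036155 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses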
	
	
	==> kube-scheduler [e6b1c8eac08441425c086405ab84bc54ba9e216996500408edb3fc35382bae36] <==
	E1025 09:14:22.652527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:14:22.652594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:14:22.652792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:14:22.652834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:14:22.652911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:14:22.652971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:14:22.653006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:14:22.653110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:14:22.653190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:14:22.653707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:14:22.653745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:14:23.570627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:14:23.572659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:14:23.595960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:14:23.608319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:14:23.634409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:14:23.644770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:14:23.695592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:14:23.757151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:14:23.846627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:14:23.862159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:14:23.865350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:14:23.919761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:14:24.040403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:14:25.948883       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
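
The burst of "Failed to watch ... forbidden" errors between 09:14:22 and 09:14:24 looks like the usual startup race: the scheduler comes up before the apiserver finishes bootstrapping the default RBAC objects, and the "Caches are synced" line at 09:14:25 shows it recovered on its own. The bootstrap binding can be confirmed afterwards with:

	kubectl --context newest-cni-036155 get clusterrolebinding system:kube-scheduler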
	
	
	==> kubelet <==
	Oct 25 09:14:25 newest-cni-036155 kubelet[1315]: I1025 09:14:25.391945    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b070a07de9595fc644cf8a591a3f9ac-kubeconfig\") pod \"kube-scheduler-newest-cni-036155\" (UID: \"8b070a07de9595fc644cf8a591a3f9ac\") " pod="kube-system/kube-scheduler-newest-cni-036155"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.177006    1315 apiserver.go:52] "Watching apiserver"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.183280    1315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.224362    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-036155"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.225060    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-036155"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.225358    1315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-036155"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.230900    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-036155" podStartSLOduration=1.230876169 podStartE2EDuration="1.230876169s" podCreationTimestamp="2025-10-25 09:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:26.218045352 +0000 UTC m=+1.104924128" watchObservedRunningTime="2025-10-25 09:14:26.230876169 +0000 UTC m=+1.117754930"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: E1025 09:14:26.235056    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-036155\" already exists" pod="kube-system/etcd-newest-cni-036155"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: E1025 09:14:26.236030    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-036155\" already exists" pod="kube-system/kube-scheduler-newest-cni-036155"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: E1025 09:14:26.236386    1315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-036155\" already exists" pod="kube-system/kube-controller-manager-newest-cni-036155"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.244443    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-036155" podStartSLOduration=1.244422625 podStartE2EDuration="1.244422625s" podCreationTimestamp="2025-10-25 09:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:26.231083672 +0000 UTC m=+1.117962426" watchObservedRunningTime="2025-10-25 09:14:26.244422625 +0000 UTC m=+1.131301383"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.244527    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-036155" podStartSLOduration=1.244523647 podStartE2EDuration="1.244523647s" podCreationTimestamp="2025-10-25 09:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:26.244424264 +0000 UTC m=+1.131303023" watchObservedRunningTime="2025-10-25 09:14:26.244523647 +0000 UTC m=+1.131402416"
	Oct 25 09:14:26 newest-cni-036155 kubelet[1315]: I1025 09:14:26.254265    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-036155" podStartSLOduration=1.254241971 podStartE2EDuration="1.254241971s" podCreationTimestamp="2025-10-25 09:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:26.254154899 +0000 UTC m=+1.141033659" watchObservedRunningTime="2025-10-25 09:14:26.254241971 +0000 UTC m=+1.141120731"
	Oct 25 09:14:29 newest-cni-036155 kubelet[1315]: I1025 09:14:29.631518    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:14:29 newest-cni-036155 kubelet[1315]: I1025 09:14:29.632368    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.831870    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/654d5723-8e97-4e1c-ab21-08e23e9f574e-xtables-lock\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.831934    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/654d5723-8e97-4e1c-ab21-08e23e9f574e-lib-modules\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.831960    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-cni-cfg\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.832038    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4msh\" (UniqueName: \"kubernetes.io/projected/654d5723-8e97-4e1c-ab21-08e23e9f574e-kube-api-access-r4msh\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.832086    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/654d5723-8e97-4e1c-ab21-08e23e9f574e-kube-proxy\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.832111    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-xtables-lock\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.832133    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76x8k\" (UniqueName: \"kubernetes.io/projected/176c8540-38da-4aff-8d5f-39bf3ec9b000-kube-api-access-76x8k\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:14:30 newest-cni-036155 kubelet[1315]: I1025 09:14:30.832155    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-lib-modules\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:14:31 newest-cni-036155 kubelet[1315]: I1025 09:14:31.266911    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pbnz4" podStartSLOduration=1.2668882510000001 podStartE2EDuration="1.266888251s" podCreationTimestamp="2025-10-25 09:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:31.251695617 +0000 UTC m=+6.138574388" watchObservedRunningTime="2025-10-25 09:14:31.266888251 +0000 UTC m=+6.153767007"
	Oct 25 09:14:31 newest-cni-036155 kubelet[1315]: I1025 09:14:31.278550    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6wgfs" podStartSLOduration=1.278524496 podStartE2EDuration="1.278524496s" podCreationTimestamp="2025-10-25 09:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:14:31.267681478 +0000 UTC m=+6.154560237" watchObservedRunningTime="2025-10-25 09:14:31.278524496 +0000 UTC m=+6.165403258"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-036155 -n newest-cni-036155
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-036155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-2g5ff storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner: exit status 1 (57.989745ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-2g5ff" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.10s)
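
Taken together, this failure reads as a startup race rather than a broken addon: the check ran about two seconds after "Done!", while the node still carried the not-ready taint, and by the time the post-mortem describe ran, the Pending coredns replica had already been replaced by the rescale at 09:14:31, hence the NotFound errors. When reproducing manually, waiting for node readiness first rules this race out (a sketch, not part of the test):

	kubectl --context newest-cni-036155 wait --for=condition=Ready node/newest-cni-036155 --timeout=60s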

TestStartStop/group/newest-cni/serial/Pause (6.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-036155 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-036155 --alsologtostderr -v=1: exit status 80 (2.42087816s)

-- stdout --
	* Pausing node newest-cni-036155 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:15:06.654961  275992 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:06.655256  275992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:06.655267  275992 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:06.655271  275992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:06.655515  275992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:06.655781  275992 out.go:368] Setting JSON to false
	I1025 09:15:06.655826  275992 mustload.go:65] Loading cluster: newest-cni-036155
	I1025 09:15:06.656171  275992 config.go:182] Loaded profile config "newest-cni-036155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:06.656537  275992 cli_runner.go:164] Run: docker container inspect newest-cni-036155 --format={{.State.Status}}
	I1025 09:15:06.677895  275992 host.go:66] Checking if "newest-cni-036155" exists ...
	I1025 09:15:06.678233  275992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:06.772670  275992 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:15:06.758370488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:06.774065  275992 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-036155 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:15:06.777070  275992 out.go:179] * Pausing node newest-cni-036155 ... 
	I1025 09:15:06.778926  275992 host.go:66] Checking if "newest-cni-036155" exists ...
	I1025 09:15:06.779308  275992 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:06.779368  275992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-036155
	I1025 09:15:06.803106  275992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/newest-cni-036155/id_rsa Username:docker}
	I1025 09:15:06.918255  275992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:06.937705  275992 pause.go:52] kubelet running: true
	I1025 09:15:06.937775  275992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:07.104561  275992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:07.104700  275992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:07.186874  275992 cri.go:89] found id: "356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4"
	I1025 09:15:07.186909  275992 cri.go:89] found id: "def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d"
	I1025 09:15:07.186915  275992 cri.go:89] found id: "d7607334fe7386d186adb584fdb41460a2e2fd4e9355b93061104e8167acdecd"
	I1025 09:15:07.186919  275992 cri.go:89] found id: "33a0bc0f34ee99f4a348de056f809c2eea010ecb6a3c819fb1cc307364b313b0"
	I1025 09:15:07.186922  275992 cri.go:89] found id: "db0e75e0e0ccac99b7c48227dd6e67f396c1f75e83268f3b34b6f44d09dcbdb3"
	I1025 09:15:07.186925  275992 cri.go:89] found id: "59e96c6e497425eaa9001b8e1975fce360724531063a797a14e224529985ae46"
	I1025 09:15:07.186927  275992 cri.go:89] found id: ""
	I1025 09:15:07.186966  275992 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:07.200867  275992 retry.go:31] will retry after 308.616544ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:07Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:07.510428  275992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:07.526850  275992 pause.go:52] kubelet running: false
	I1025 09:15:07.526906  275992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:07.699377  275992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:07.699458  275992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:07.785855  275992 cri.go:89] found id: "356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4"
	I1025 09:15:07.785880  275992 cri.go:89] found id: "def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d"
	I1025 09:15:07.785886  275992 cri.go:89] found id: "d7607334fe7386d186adb584fdb41460a2e2fd4e9355b93061104e8167acdecd"
	I1025 09:15:07.785891  275992 cri.go:89] found id: "33a0bc0f34ee99f4a348de056f809c2eea010ecb6a3c819fb1cc307364b313b0"
	I1025 09:15:07.785894  275992 cri.go:89] found id: "db0e75e0e0ccac99b7c48227dd6e67f396c1f75e83268f3b34b6f44d09dcbdb3"
	I1025 09:15:07.785899  275992 cri.go:89] found id: "59e96c6e497425eaa9001b8e1975fce360724531063a797a14e224529985ae46"
	I1025 09:15:07.785902  275992 cri.go:89] found id: ""
	I1025 09:15:07.785977  275992 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:07.799952  275992 retry.go:31] will retry after 368.208448ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:07Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:08.168478  275992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:08.182363  275992 pause.go:52] kubelet running: false
	I1025 09:15:08.182418  275992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:08.320238  275992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:08.320330  275992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:08.391848  275992 cri.go:89] found id: "356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4"
	I1025 09:15:08.391868  275992 cri.go:89] found id: "def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d"
	I1025 09:15:08.391887  275992 cri.go:89] found id: "d7607334fe7386d186adb584fdb41460a2e2fd4e9355b93061104e8167acdecd"
	I1025 09:15:08.391899  275992 cri.go:89] found id: "33a0bc0f34ee99f4a348de056f809c2eea010ecb6a3c819fb1cc307364b313b0"
	I1025 09:15:08.391903  275992 cri.go:89] found id: "db0e75e0e0ccac99b7c48227dd6e67f396c1f75e83268f3b34b6f44d09dcbdb3"
	I1025 09:15:08.391907  275992 cri.go:89] found id: "59e96c6e497425eaa9001b8e1975fce360724531063a797a14e224529985ae46"
	I1025 09:15:08.391912  275992 cri.go:89] found id: ""
	I1025 09:15:08.391960  275992 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:08.404712  275992 retry.go:31] will retry after 363.469993ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:08Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:08.769311  275992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:08.783178  275992 pause.go:52] kubelet running: false
	I1025 09:15:08.783225  275992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:08.907999  275992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:08.908081  275992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:08.982460  275992 cri.go:89] found id: "356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4"
	I1025 09:15:08.982490  275992 cri.go:89] found id: "def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d"
	I1025 09:15:08.982496  275992 cri.go:89] found id: "d7607334fe7386d186adb584fdb41460a2e2fd4e9355b93061104e8167acdecd"
	I1025 09:15:08.982501  275992 cri.go:89] found id: "33a0bc0f34ee99f4a348de056f809c2eea010ecb6a3c819fb1cc307364b313b0"
	I1025 09:15:08.982506  275992 cri.go:89] found id: "db0e75e0e0ccac99b7c48227dd6e67f396c1f75e83268f3b34b6f44d09dcbdb3"
	I1025 09:15:08.982511  275992 cri.go:89] found id: "59e96c6e497425eaa9001b8e1975fce360724531063a797a14e224529985ae46"
	I1025 09:15:08.982516  275992 cri.go:89] found id: ""
	I1025 09:15:08.982575  275992 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:08.997366  275992 out.go:203] 
	W1025 09:15:08.998955  275992 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:15:08.998980  275992 out.go:285] * 
	* 
	W1025 09:15:09.004667  275992 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:15:09.006144  275992 out.go:203] 
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-036155 --alsologtostderr -v=1 failed: exit status 80
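The stderr above shows the pause sequence step by step: probe the kubelet unit, disable it, enumerate CRI containers per namespace via crictl, then call sudo runc list -f json. That last step is what fails: /run/runc (runc's state root) is missing on this CRI-O node, suggesting the runtime state lives elsewhere, and retry.go backs off three times (roughly 300-370ms) before surfacing GUEST_PAUSE. A minimal Go sketch of that loop, with illustrative attempt counts and delays rather than minikube's exact policy:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// kubeletActive mirrors "sudo systemctl is-active --quiet service kubelet";
	// systemctl exits 0 only when the unit is active.
	func kubeletActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
	}

	func main() {
		if kubeletActive() {
			// "sudo systemctl disable --now kubelet" in the log above.
			_ = exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run()
		}
		delay := 300 * time.Millisecond
		for attempt := 1; attempt <= 4; attempt++ {
			// The failing step: with no /run/runc directory on the node,
			// every attempt exits with status 1.
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay += 35 * time.Millisecond // stand-in for the jittered backoff in retry.go
		}
		fmt.Println("giving up") // minikube reports this as GUEST_PAUSE, exit status 80
	}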
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-036155
helpers_test.go:243: (dbg) docker inspect newest-cni-036155:
-- stdout --
	[
	    {
	        "Id": "09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb",
	        "Created": "2025-10-25T09:14:06.682120526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273090,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:14:52.47700497Z",
	            "FinishedAt": "2025-10-25T09:14:51.422988056Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/hostname",
	        "HostsPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/hosts",
	        "LogPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb-json.log",
	        "Name": "/newest-cni-036155",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-036155:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-036155",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb",
	                "LowerDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-036155",
	                "Source": "/var/lib/docker/volumes/newest-cni-036155/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-036155",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-036155",
	                "name.minikube.sigs.k8s.io": "newest-cni-036155",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8eb75ef37e3c1d7abe0ff323b397eced353d93f34e60cafc3ecdde4410295fa1",
	            "SandboxKey": "/var/run/docker/netns/8eb75ef37e3c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-036155": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:10:6b:78:ab:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ead30f4723103afe9c35f4580c74d2202de41578e0480b83c36d81600895331e",
	                    "EndpointID": "62e8683c28a3574136f703f1ccd586d0c09409f03b6c8c93206e7b30c2ff8e74",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-036155",
	                        "09a0c00b2999"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
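The pause stderr earlier resolves the node's SSH endpoint from exactly this inspect data: the Ports map above binds 22/tcp to 127.0.0.1:33100, and cli_runner.go extracts it with a Go template. A small sketch of the same lookup, assuming docker is on PATH; sshHostPort is an illustrative name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort runs the same --format template seen in the log to pull
	// the host port mapped to the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect: %w", err)
		}
		return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
	}

	func main() {
		port, err := sshHostPort("newest-cni-036155")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port) // 33100 in the inspect output above
	}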
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155: exit status 2 (355.06831ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
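helpers_test.go tolerates the non-zero status here ("may be ok") because the host can report Running while stopped or paused components drive the exit code up; the code's meaning is minikube-specific, so this sketch only surfaces it without interpreting it, reusing the invocation above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "newest-cni-036155", "-n", "newest-cni-036155")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // "Running" in the post-mortem above
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 2 here; the helper treats it as possibly ok
		}
	}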
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-036155 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-036155 logs -n 25: (1.038354236s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p cert-expiration-851718                                                                                                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p disable-driver-mounts-664368                                                                                                                                                                                                               │ disable-driver-mounts-664368 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ image   │ no-preload-016092 image list --format=json                                                                                                                                                                                                    │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ pause   │ -p no-preload-016092 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-891466 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ stop    │ -p embed-certs-106968 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p newest-cni-036155 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-106968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-891466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-036155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ image   │ newest-cni-036155 image list --format=json                                                                                                                                                                                                    │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p newest-cni-036155 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:07.634591  276315 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:07.635180  276315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:07.635233  276315 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:07.635243  276315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:07.635778  276315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:07.636815  276315 out.go:368] Setting JSON to false
	I1025 09:15:07.638563  276315 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3456,"bootTime":1761380252,"procs":346,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:07.638652  276315 start.go:141] virtualization: kvm guest
	I1025 09:15:07.639922  276315 out.go:179] * [kubernetes-upgrade-497496] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:07.642048  276315 notify.go:220] Checking for updates...
	I1025 09:15:07.642079  276315 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:15:07.643538  276315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:07.644782  276315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:07.646166  276315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:15:07.647516  276315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:15:07.649188  276315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:15:07.650918  276315 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:07.651441  276315 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:07.681738  276315 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:07.681878  276315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:07.749532  276315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:15:07.737518749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:07.749694  276315 docker.go:318] overlay module found
	I1025 09:15:07.752019  276315 out.go:179] * Using the docker driver based on existing profile
	I1025 09:15:07.753378  276315 start.go:305] selected driver: docker
	I1025 09:15:07.753397  276315 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-497496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-497496 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:07.753531  276315 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:15:07.754325  276315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:07.819301  276315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:15:07.807276662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:07.819612  276315 cni.go:84] Creating CNI manager for ""
	I1025 09:15:07.819723  276315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:07.819774  276315 start.go:349] cluster config:
	{Name:kubernetes-upgrade-497496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-497496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:07.821924  276315 out.go:179] * Starting "kubernetes-upgrade-497496" primary control-plane node in "kubernetes-upgrade-497496" cluster
	I1025 09:15:07.823352  276315 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:15:07.824615  276315 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:07.825893  276315 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:07.825948  276315 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:15:07.825963  276315 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:07.826026  276315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:07.826063  276315 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:15:07.826076  276315 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:15:07.826233  276315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/config.json ...
	I1025 09:15:07.848031  276315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:15:07.848050  276315 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:15:07.848068  276315 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:15:07.848113  276315 start.go:360] acquireMachinesLock for kubernetes-upgrade-497496: {Name:mk323312a51215d34a9376630de8014b7448c3f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:15:07.848177  276315 start.go:364] duration metric: took 39.887µs to acquireMachinesLock for "kubernetes-upgrade-497496"
	I1025 09:15:07.848200  276315 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:15:07.848209  276315 fix.go:54] fixHost starting: 
	I1025 09:15:07.848428  276315 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-497496 --format={{.State.Status}}
	I1025 09:15:07.867714  276315 fix.go:112] recreateIfNeeded on kubernetes-upgrade-497496: state=Running err=<nil>
	W1025 09:15:07.867750  276315 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:15:04.389950  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:06.390064  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:08.889961  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.727898581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.731945378Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7c701dd5-6745-4965-b3fc-c166dbd21d65 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.73318461Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=38e3b77b-95f4-42d5-a5db-37e9186c6bd9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.736248737Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.737789819Z" level=info msg="Ran pod sandbox 507eaab8ca59b3fceed1c52f963d1f824b02fdf5e2295714bef152daf2a349a3 with infra container: kube-system/kindnet-pbnz4/POD" id=38e3b77b-95f4-42d5-a5db-37e9186c6bd9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.737914386Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.73948329Z" level=info msg="Ran pod sandbox 7b0fa422f0e805a1bdf89b9420c3fd35615eb1db8b9477d7e5bd96c575c39d28 with infra container: kube-system/kube-proxy-6wgfs/POD" id=7c701dd5-6745-4965-b3fc-c166dbd21d65 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.739579838Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=825cb963-fe5d-4ccb-b7cd-028135ec02ce name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.740906851Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=08446422-5788-4ff2-9fa6-2661661341a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.740964256Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dadf6067-25c3-4618-be61-15c647bbf474 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.741802078Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=17b57e8c-20b1-465c-8427-bccbf0de7c8b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.742388502Z" level=info msg="Creating container: kube-system/kindnet-pbnz4/kindnet-cni" id=8c1b01d5-4e02-417e-aa39-42256100a012 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.742662175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.74278831Z" level=info msg="Creating container: kube-system/kube-proxy-6wgfs/kube-proxy" id=96d74db4-a9f4-4712-a2a6-104a6f13920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.742883895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.749003736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.749745124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.751804637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.752415967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.784240692Z" level=info msg="Created container def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d: kube-system/kindnet-pbnz4/kindnet-cni" id=8c1b01d5-4e02-417e-aa39-42256100a012 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.785708751Z" level=info msg="Starting container: def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d" id=00ef8457-2f85-477d-8b9d-62de8fb5620e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.788289974Z" level=info msg="Created container 356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4: kube-system/kube-proxy-6wgfs/kube-proxy" id=96d74db4-a9f4-4712-a2a6-104a6f13920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.78862768Z" level=info msg="Started container" PID=1039 containerID=def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d description=kube-system/kindnet-pbnz4/kindnet-cni id=00ef8457-2f85-477d-8b9d-62de8fb5620e name=/runtime.v1.RuntimeService/StartContainer sandboxID=507eaab8ca59b3fceed1c52f963d1f824b02fdf5e2295714bef152daf2a349a3
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.789112464Z" level=info msg="Starting container: 356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4" id=b6e17c33-495c-4af6-a9a1-6db03033f3d1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.792718123Z" level=info msg="Started container" PID=1040 containerID=356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4 description=kube-system/kube-proxy-6wgfs/kube-proxy id=b6e17c33-495c-4af6-a9a1-6db03033f3d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b0fa422f0e805a1bdf89b9420c3fd35615eb1db8b9477d7e5bd96c575c39d28
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	356e11399aaa8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   7b0fa422f0e80       kube-proxy-6wgfs                            kube-system
	def8635c72497       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   507eaab8ca59b       kindnet-pbnz4                               kube-system
	d7607334fe738       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   e4744c57719b1       kube-controller-manager-newest-cni-036155   kube-system
	33a0bc0f34ee9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   09020458b738f       kube-scheduler-newest-cni-036155            kube-system
	db0e75e0e0cca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   ee21fa608bd90       etcd-newest-cni-036155                      kube-system
	59e96c6e49742       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   a59c58ae83115       kube-apiserver-newest-cni-036155            kube-system
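
The table above is the CRI-level view of the node's containers. A minimal sketch that reproduces it from inside the node, assuming crictl is installed and pointed at the CRI-O socket:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `crictl ps -a` lists running and exited containers, the same
		// data summarized in the "container status" section above.
		out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}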
	
	
	==> describe nodes <==
	Name:               newest-cni-036155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-036155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=newest-cni-036155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_14_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:14:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-036155
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:15:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-036155
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a6cc6b5c-90cd-48b2-886c-5a78739a4071
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-036155                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         45s
	  kube-system                 kindnet-pbnz4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-apiserver-newest-cni-036155             250m (3%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-newest-cni-036155    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-6wgfs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-newest-cni-036155             100m (1%)     0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s                kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s                kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s                kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node newest-cni-036155 event: Registered Node newest-cni-036155 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)    kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-036155 event: Registered Node newest-cni-036155 in Controller
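
The Ready=False condition above (reason NetworkPluginNotReady, "no CNI configuration file in /etc/cni/net.d/") and the matching node.kubernetes.io/not-ready:NoSchedule taint are expected this soon after a restart: kindnet had only just started and had not yet written its CNI config. A minimal sketch for pulling just that condition, in the same style as the kubectl invocations further below (context name copied from the report):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "newest-cni-036155",
			"get", "node", "newest-cni-036155", "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].message}`).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Println(string(out))
	}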
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [db0e75e0e0ccac99b7c48227dd6e67f396c1f75e83268f3b34b6f44d09dcbdb3] <==
	{"level":"warn","ts":"2025-10-25T09:15:03.563718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.570855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.577865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.589595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.598228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.605237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.618691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.624978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.631261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.638587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.646465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.659219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.669275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.673004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.680830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.695044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.701588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.707921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.717275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.724435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.730625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.742736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.749247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.755758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.817932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:15:10 up 57 min,  0 user,  load average: 3.19, 3.11, 2.20
	Linux newest-cni-036155 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d] <==
	I1025 09:15:04.976443       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:15:04.976708       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:15:04.976813       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:15:04.976827       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:15:04.976847       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:15:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:15:05.276535       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:15:05.369520       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:15:05.369667       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:15:05.369885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:15:05.669882       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:15:05.669920       1 metrics.go:72] Registering metrics
	I1025 09:15:05.670029       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [59e96c6e497425eaa9001b8e1975fce360724531063a797a14e224529985ae46] <==
	I1025 09:15:04.296849       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:15:04.297290       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:15:04.297497       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:15:04.298201       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:15:04.298222       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:15:04.298229       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:15:04.298236       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:15:04.299032       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:15:04.299115       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:15:04.299419       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:15:04.299474       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:15:04.312189       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:15:04.327940       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:15:04.522846       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:15:04.638961       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:15:04.671926       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:15:04.694564       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:15:04.703997       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:15:04.764181       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.230.85"}
	I1025 09:15:04.781930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.193.197"}
	I1025 09:15:05.198755       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:15:07.976978       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:15:08.027925       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:15:08.126954       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d7607334fe7386d186adb584fdb41460a2e2fd4e9355b93061104e8167acdecd] <==
	I1025 09:15:07.602797       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:15:07.605108       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:15:07.616419       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:15:07.622933       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:15:07.623260       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:15:07.623377       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:15:07.623566       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:15:07.623659       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:15:07.623667       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:15:07.624121       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:15:07.624289       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:15:07.629324       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:15:07.629345       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:15:07.629362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:15:07.630459       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:15:07.634070       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:15:07.636010       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:15:07.641218       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-036155"
	I1025 09:15:07.641373       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:15:07.646340       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:15:07.648978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:15:07.650069       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:15:07.653238       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:15:07.659444       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:15:07.669779       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4] <==
	I1025 09:15:04.833461       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:15:04.903967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:15:05.004933       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:15:05.004980       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:15:05.005143       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:15:05.025538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:15:05.025610       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:15:05.031493       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:15:05.032003       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:15:05.032037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:15:05.033347       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:15:05.033368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:15:05.033443       1 config.go:200] "Starting service config controller"
	I1025 09:15:05.033555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:15:05.033671       1 config.go:309] "Starting node config controller"
	I1025 09:15:05.033695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:15:05.033703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:15:05.033586       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:15:05.033711       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:15:05.133906       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:15:05.133942       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:15:05.133953       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
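
The one E-level line above ("nodePortAddresses is unset ...") is a configuration hint rather than a failure. Under the kubeadm-style bootstrap minikube uses, kube-proxy's configuration lives in the kube-proxy ConfigMap in kube-system; a minimal sketch for inspecting it (the ConfigMap name and config.conf key are the kubeadm defaults, assumed here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "newest-cni-036155",
			"-n", "kube-system", "get", "configmap", "kube-proxy",
			"-o", `jsonpath={.data.config\.conf}`).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Println(string(out))
	}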
	
	
	==> kube-scheduler [33a0bc0f34ee99f4a348de056f809c2eea010ecb6a3c819fb1cc307364b313b0] <==
	W1025 09:15:04.208182       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:15:04.251185       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:15:04.251277       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:15:04.258568       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:15:04.258809       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:15:04.259051       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:15:04.258848       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:15:04.264881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:15:04.265002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:15:04.270134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:15:04.270374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:15:04.270578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:15:04.270724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:15:04.270945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:15:04.271166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:15:04.271167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:15:04.277257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:15:04.280033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:15:04.280356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:15:04.281502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:15:04.282423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:15:04.283098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:15:04.283352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:15:04.285620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1025 09:15:04.359206       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
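
The burst of "Failed to watch ... clusterrole ... not found" errors above is a startup race: the scheduler begins listing resources before the freshly restarted apiserver has re-created its bootstrap RBAC objects, and the errors stop once the caches sync (the final line). A minimal sketch to confirm the roles exist after startup settles:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "newest-cni-036155",
			"get", "clusterrole", "system:kube-scheduler",
			"system:volume-scheduler").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Print(string(out))
	}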
	
	
	==> kubelet <==
	Oct 25 09:15:03 newest-cni-036155 kubelet[665]: E1025 09:15:03.493518     665 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-036155\" not found" node="newest-cni-036155"
	Oct 25 09:15:03 newest-cni-036155 kubelet[665]: E1025 09:15:03.543466     665 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-036155\" not found" node="newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.224186     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.296558     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-036155\" already exists" pod="kube-system/etcd-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.296613     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.313747     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-036155\" already exists" pod="kube-system/kube-apiserver-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.313930     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.320808     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-036155\" already exists" pod="kube-system/kube-controller-manager-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.320849     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.321204     665 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.321430     665 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.321539     665 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.322751     665 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.328618     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-036155\" already exists" pod="kube-system/kube-scheduler-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.417506     665 apiserver.go:52] "Watching apiserver"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.427274     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518109     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/654d5723-8e97-4e1c-ab21-08e23e9f574e-xtables-lock\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518168     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-cni-cfg\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518313     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-xtables-lock\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518376     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-lib-modules\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518469     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/654d5723-8e97-4e1c-ab21-08e23e9f574e-lib-modules\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:15:07 newest-cni-036155 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:15:07 newest-cni-036155 kubelet[665]: I1025 09:15:07.082181     665 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 09:15:07 newest-cni-036155 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:15:07 newest-cni-036155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-036155 -n newest-cni-036155
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-036155 -n newest-cni-036155: exit status 2 (353.993914ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
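
The `--format={{.APIServer}}` flag above is a Go text/template evaluated against minikube's status struct, which is why stdout can read "Running" while the exit status (2 here) separately signals that some component is stopped (the kubelet, per the kubelet log above). A minimal sketch of the same mechanism (the Status struct is illustrative, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's status fields; only the shape matters.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Running", matching the stdout captured above, even though
		// other fields would show the kubelet as Stopped.
		if err := tmpl.Execute(os.Stdout, Status{
			Host: "Running", Kubelet: "Stopped", APIServer: "Running",
		}); err != nil {
			panic(err)
		}
	}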
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-036155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5
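
The query above combines --field-selector (server-side filtering out Running pods) with a jsonpath that flattens the matching names onto one line. A minimal standalone sketch of the same call:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "newest-cni-036155",
			"get", "po", "-A",
			"--field-selector", "status.phase!=Running",
			"-o", "jsonpath={.items[*].metadata.name}").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Println(string(out))
	}

Note that the describe step below then fails with NotFound for every name: the pods were replaced between the list and the describe, a race the post-mortem helper simply records before moving on.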
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5: exit status 1 (68.747942ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-2g5ff" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-986bs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c9sj5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-036155
helpers_test.go:243: (dbg) docker inspect newest-cni-036155:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb",
	        "Created": "2025-10-25T09:14:06.682120526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273090,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:14:52.47700497Z",
	            "FinishedAt": "2025-10-25T09:14:51.422988056Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/hostname",
	        "HostsPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/hosts",
	        "LogPath": "/var/lib/docker/containers/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb/09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb-json.log",
	        "Name": "/newest-cni-036155",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-036155:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-036155",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09a0c00b29998bc7af4ec11c7a125501a3fe40674e51eb6ba90db972593a7beb",
	                "LowerDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31642d72dc2b3230e0ba8b24fcb247f758923abad0b14c96b7b408d219eae0d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-036155",
	                "Source": "/var/lib/docker/volumes/newest-cni-036155/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-036155",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-036155",
	                "name.minikube.sigs.k8s.io": "newest-cni-036155",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8eb75ef37e3c1d7abe0ff323b397eced353d93f34e60cafc3ecdde4410295fa1",
	            "SandboxKey": "/var/run/docker/netns/8eb75ef37e3c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-036155": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:10:6b:78:ab:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ead30f4723103afe9c35f4580c74d2202de41578e0480b83c36d81600895331e",
	                    "EndpointID": "62e8683c28a3574136f703f1ccd586d0c09409f03b6c8c93206e7b30c2ff8e74",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-036155",
	                        "09a0c00b2999"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
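The JSON above is the raw "docker container inspect" record for the kic node container. Each HostConfig.PortBindings entry requests an ephemeral host port (an empty "HostPort"), and the resolved ports appear under NetworkSettings.Ports (33100 through 33104 here). A single field can be pulled with a Go template instead of scanning the whole dump; the template below is the same one the Last Start log further down uses to locate a node's SSH port, pointed at the container inspected above (a sketch, assuming the container still exists):

	docker container inspect newest-cni-036155 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'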
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155: exit status 2 (375.002023ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-036155 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-036155 logs -n 25: (1.091443849s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p cert-expiration-851718                                                                                                                                                                                                                     │ cert-expiration-851718       │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ delete  │ -p disable-driver-mounts-664368                                                                                                                                                                                                               │ disable-driver-mounts-664368 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ image   │ no-preload-016092 image list --format=json                                                                                                                                                                                                    │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:13 UTC │
	│ pause   │ -p no-preload-016092 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │                     │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:13 UTC │ 25 Oct 25 09:14 UTC │
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-891466 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ stop    │ -p embed-certs-106968 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p newest-cni-036155 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-106968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-891466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-036155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ image   │ newest-cni-036155 image list --format=json                                                                                                                                                                                                    │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p newest-cni-036155 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:07.634591  276315 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:07.635180  276315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:07.635233  276315 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:07.635243  276315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:07.635778  276315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:07.636815  276315 out.go:368] Setting JSON to false
	I1025 09:15:07.638563  276315 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3456,"bootTime":1761380252,"procs":346,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:07.638652  276315 start.go:141] virtualization: kvm guest
	I1025 09:15:07.639922  276315 out.go:179] * [kubernetes-upgrade-497496] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:07.642048  276315 notify.go:220] Checking for updates...
	I1025 09:15:07.642079  276315 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:15:07.643538  276315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:07.644782  276315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:07.646166  276315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:15:07.647516  276315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:15:07.649188  276315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:15:07.650918  276315 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:07.651441  276315 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:07.681738  276315 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:07.681878  276315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:07.749532  276315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:15:07.737518749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:07.749694  276315 docker.go:318] overlay module found
	I1025 09:15:07.752019  276315 out.go:179] * Using the docker driver based on existing profile
	I1025 09:15:07.753378  276315 start.go:305] selected driver: docker
	I1025 09:15:07.753397  276315 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-497496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-497496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:07.753531  276315 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:15:07.754325  276315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:07.819301  276315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:15:07.807276662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:07.819612  276315 cni.go:84] Creating CNI manager for ""
	I1025 09:15:07.819723  276315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:07.819774  276315 start.go:349] cluster config:
	{Name:kubernetes-upgrade-497496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-497496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:07.821924  276315 out.go:179] * Starting "kubernetes-upgrade-497496" primary control-plane node in "kubernetes-upgrade-497496" cluster
	I1025 09:15:07.823352  276315 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:15:07.824615  276315 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:07.825893  276315 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:07.825948  276315 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:15:07.825963  276315 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:07.826026  276315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:07.826063  276315 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:15:07.826076  276315 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:15:07.826233  276315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kubernetes-upgrade-497496/config.json ...
	I1025 09:15:07.848031  276315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:15:07.848050  276315 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:15:07.848068  276315 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:15:07.848113  276315 start.go:360] acquireMachinesLock for kubernetes-upgrade-497496: {Name:mk323312a51215d34a9376630de8014b7448c3f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:15:07.848177  276315 start.go:364] duration metric: took 39.887µs to acquireMachinesLock for "kubernetes-upgrade-497496"
	I1025 09:15:07.848200  276315 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:15:07.848209  276315 fix.go:54] fixHost starting: 
	I1025 09:15:07.848428  276315 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-497496 --format={{.State.Status}}
	I1025 09:15:07.867714  276315 fix.go:112] recreateIfNeeded on kubernetes-upgrade-497496: state=Running err=<nil>
	W1025 09:15:07.867750  276315 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:15:04.389950  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:06.390064  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:08.889961  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:07.869772  276315 out.go:252] * Updating the running docker "kubernetes-upgrade-497496" container ...
	I1025 09:15:07.869812  276315 machine.go:93] provisionDockerMachine start ...
	I1025 09:15:07.869887  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:07.890148  276315 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:07.890377  276315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:15:07.890390  276315 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:15:08.035948  276315 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-497496
	
	I1025 09:15:08.035992  276315 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-497496"
	I1025 09:15:08.036064  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:08.057677  276315 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:08.057977  276315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:15:08.057998  276315 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-497496 && echo "kubernetes-upgrade-497496" | sudo tee /etc/hostname
	I1025 09:15:08.218255  276315 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-497496
	
	I1025 09:15:08.218345  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:08.247195  276315 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:08.247654  276315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:15:08.247687  276315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-497496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-497496/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-497496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:15:08.401586  276315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
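The /etc/hosts script above is idempotent: the outer grep only triggers an edit when no entry for the hostname exists, and the inner branch either rewrites the existing 127.0.1.1 line in place or appends a new one. A quick check of the result from the host (a sketch, run against the still-running profile):

	out/minikube-linux-amd64 -p kubernetes-upgrade-497496 ssh "grep 127.0.1.1 /etc/hosts"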
	I1025 09:15:08.401623  276315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:15:08.401670  276315 ubuntu.go:190] setting up certificates
	I1025 09:15:08.401683  276315 provision.go:84] configureAuth start
	I1025 09:15:08.401751  276315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-497496
	I1025 09:15:08.422060  276315 provision.go:143] copyHostCerts
	I1025 09:15:08.422131  276315 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:15:08.422149  276315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:15:08.422241  276315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:15:08.422356  276315 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:15:08.422365  276315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:15:08.422394  276315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:15:08.422499  276315 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:15:08.422510  276315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:15:08.422535  276315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:15:08.422586  276315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-497496 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-497496 localhost minikube]
	I1025 09:15:08.545385  276315 provision.go:177] copyRemoteCerts
	I1025 09:15:08.545443  276315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:15:08.545482  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:08.565110  276315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kubernetes-upgrade-497496/id_rsa Username:docker}
	I1025 09:15:08.669259  276315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:15:08.688788  276315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 09:15:08.708604  276315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:15:08.726423  276315 provision.go:87] duration metric: took 324.722715ms to configureAuth
	I1025 09:15:08.726454  276315 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:15:08.726680  276315 config.go:182] Loaded profile config "kubernetes-upgrade-497496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:08.726776  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:08.745689  276315 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:08.745896  276315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:15:08.745911  276315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:15:09.289885  276315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
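The CRIO_MINIKUBE_OPTIONS value written to /etc/sysconfig/crio.minikube takes effect presumably because the crio unit in the kic image loads that file as an environment file; a drop-in along these lines would wire it up (a hypothetical reconstruction of the unit configuration, not a capture from the image):

	# /etc/systemd/system/crio.service.d/10-minikube.conf (hypothetical)
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS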
	
	I1025 09:15:09.289911  276315 machine.go:96] duration metric: took 1.420089558s to provisionDockerMachine
	I1025 09:15:09.289924  276315 start.go:293] postStartSetup for "kubernetes-upgrade-497496" (driver="docker")
	I1025 09:15:09.289936  276315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:15:09.289995  276315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:15:09.290045  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:09.311846  276315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kubernetes-upgrade-497496/id_rsa Username:docker}
	I1025 09:15:09.422341  276315 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:15:09.426903  276315 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:15:09.426933  276315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:15:09.426943  276315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:15:09.427000  276315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:15:09.427090  276315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:15:09.427200  276315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:15:09.435853  276315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:09.454750  276315 start.go:296] duration metric: took 164.813289ms for postStartSetup
	I1025 09:15:09.454857  276315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:15:09.454918  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:09.475817  276315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kubernetes-upgrade-497496/id_rsa Username:docker}
	I1025 09:15:09.578983  276315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:15:09.584711  276315 fix.go:56] duration metric: took 1.736494597s for fixHost
	I1025 09:15:09.584742  276315 start.go:83] releasing machines lock for "kubernetes-upgrade-497496", held for 1.736549757s
	I1025 09:15:09.584811  276315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-497496
	I1025 09:15:09.605131  276315 ssh_runner.go:195] Run: cat /version.json
	I1025 09:15:09.605190  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:09.605211  276315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:15:09.605274  276315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-497496
	I1025 09:15:09.626507  276315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kubernetes-upgrade-497496/id_rsa Username:docker}
	I1025 09:15:09.626774  276315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kubernetes-upgrade-497496/id_rsa Username:docker}
	I1025 09:15:09.787214  276315 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:09.794829  276315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:15:09.839123  276315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:15:09.844042  276315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:15:09.844110  276315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:15:09.853103  276315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:15:09.853143  276315 start.go:495] detecting cgroup driver to use...
	I1025 09:15:09.853180  276315 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:15:09.853225  276315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:15:09.870543  276315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:15:09.885209  276315 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:15:09.885292  276315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:15:09.904718  276315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:15:09.918837  276315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:15:10.028883  276315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:15:10.134062  276315 docker.go:234] disabling docker service ...
	I1025 09:15:10.134124  276315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:15:10.150708  276315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:15:10.168960  276315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:15:10.291696  276315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:15:10.399225  276315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:15:10.414092  276315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:15:10.429809  276315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:15:10.429871  276315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:10.441269  276315 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:15:10.441332  276315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:10.453016  276315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:10.463196  276315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:10.473452  276315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:15:10.483885  276315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:10.495037  276315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:10.505273  276315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:10.516934  276315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:15:10.526411  276315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:15:10.534508  276315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:10.641266  276315 ssh_runner.go:195] Run: sudo systemctl restart crio
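Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a stanza roughly like the following (reconstructed from the commands, not captured from the node; the section headers are the standard crio.conf ones):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"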
	I1025 09:15:10.795881  276315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:15:10.795947  276315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:15:10.800302  276315 start.go:563] Will wait 60s for crictl version
	I1025 09:15:10.800371  276315 ssh_runner.go:195] Run: which crictl
	I1025 09:15:10.805035  276315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:15:10.831197  276315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
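Because /etc/crictl.yaml was pinned to the CRI-O socket a few steps earlier, the bare crictl invocation above needs no endpoint flag; the explicit equivalent would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version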
	I1025 09:15:10.831282  276315 ssh_runner.go:195] Run: crio --version
	I1025 09:15:10.862411  276315 ssh_runner.go:195] Run: crio --version
	I1025 09:15:10.897147  276315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.727898581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.731945378Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7c701dd5-6745-4965-b3fc-c166dbd21d65 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.73318461Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=38e3b77b-95f4-42d5-a5db-37e9186c6bd9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.736248737Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.737789819Z" level=info msg="Ran pod sandbox 507eaab8ca59b3fceed1c52f963d1f824b02fdf5e2295714bef152daf2a349a3 with infra container: kube-system/kindnet-pbnz4/POD" id=38e3b77b-95f4-42d5-a5db-37e9186c6bd9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.737914386Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.73948329Z" level=info msg="Ran pod sandbox 7b0fa422f0e805a1bdf89b9420c3fd35615eb1db8b9477d7e5bd96c575c39d28 with infra container: kube-system/kube-proxy-6wgfs/POD" id=7c701dd5-6745-4965-b3fc-c166dbd21d65 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.739579838Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=825cb963-fe5d-4ccb-b7cd-028135ec02ce name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.740906851Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=08446422-5788-4ff2-9fa6-2661661341a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.740964256Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dadf6067-25c3-4618-be61-15c647bbf474 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.741802078Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=17b57e8c-20b1-465c-8427-bccbf0de7c8b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.742388502Z" level=info msg="Creating container: kube-system/kindnet-pbnz4/kindnet-cni" id=8c1b01d5-4e02-417e-aa39-42256100a012 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.742662175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.74278831Z" level=info msg="Creating container: kube-system/kube-proxy-6wgfs/kube-proxy" id=96d74db4-a9f4-4712-a2a6-104a6f13920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.742883895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.749003736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.749745124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.751804637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.752415967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.784240692Z" level=info msg="Created container def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d: kube-system/kindnet-pbnz4/kindnet-cni" id=8c1b01d5-4e02-417e-aa39-42256100a012 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.785708751Z" level=info msg="Starting container: def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d" id=00ef8457-2f85-477d-8b9d-62de8fb5620e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.788289974Z" level=info msg="Created container 356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4: kube-system/kube-proxy-6wgfs/kube-proxy" id=96d74db4-a9f4-4712-a2a6-104a6f13920e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.78862768Z" level=info msg="Started container" PID=1039 containerID=def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d description=kube-system/kindnet-pbnz4/kindnet-cni id=00ef8457-2f85-477d-8b9d-62de8fb5620e name=/runtime.v1.RuntimeService/StartContainer sandboxID=507eaab8ca59b3fceed1c52f963d1f824b02fdf5e2295714bef152daf2a349a3
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.789112464Z" level=info msg="Starting container: 356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4" id=b6e17c33-495c-4af6-a9a1-6db03033f3d1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:04 newest-cni-036155 crio[519]: time="2025-10-25T09:15:04.792718123Z" level=info msg="Started container" PID=1040 containerID=356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4 description=kube-system/kube-proxy-6wgfs/kube-proxy id=b6e17c33-495c-4af6-a9a1-6db03033f3d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b0fa422f0e805a1bdf89b9420c3fd35615eb1db8b9477d7e5bd96c575c39d28
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	356e11399aaa8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   7b0fa422f0e80       kube-proxy-6wgfs                            kube-system
	def8635c72497       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   507eaab8ca59b       kindnet-pbnz4                               kube-system
	d7607334fe738       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   e4744c57719b1       kube-controller-manager-newest-cni-036155   kube-system
	33a0bc0f34ee9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   09020458b738f       kube-scheduler-newest-cni-036155            kube-system
	db0e75e0e0cca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   ee21fa608bd90       etcd-newest-cni-036155                      kube-system
	59e96c6e49742       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   a59c58ae83115       kube-apiserver-newest-cni-036155            kube-system
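	The table above is CRI-O's view through the CRI API; a hedged way to reproduce it on the node itself, using the profile name from this run, is:
	  out/minikube-linux-amd64 -p newest-cni-036155 ssh -- sudo crictl ps -a
	  # -a also lists exited containers; the truncated IDs in the first column work as
	  # arguments to `sudo crictl inspect <id>` for full per-container detail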
	
	
	==> describe nodes <==
	Name:               newest-cni-036155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-036155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=newest-cni-036155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_14_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:14:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-036155
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:15:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:15:04 +0000   Sat, 25 Oct 2025 09:14:20 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-036155
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a6cc6b5c-90cd-48b2-886c-5a78739a4071
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-036155                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         47s
	  kube-system                 kindnet-pbnz4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-apiserver-newest-cni-036155             250m (3%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-controller-manager-newest-cni-036155    200m (2%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-6wgfs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-scheduler-newest-cni-036155             100m (1%)     0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 7s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s                kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s                kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s                kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node newest-cni-036155 event: Registered Node newest-cni-036155 in Controller
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-036155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-036155 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-036155 event: Registered Node newest-cni-036155 in Controller
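	The Ready=False condition above (NetworkPluginNotReady: no CNI configuration file in /etc/cni/net.d/) is expected while kindnet is still restarting; a hedged check that the config eventually lands (the conflist name below is an assumption about what kindnet writes):
	  out/minikube-linux-amd64 -p newest-cni-036155 ssh -- ls -l /etc/cni/net.d/
	  # kubelet flips the node to Ready once a config such as 10-kindnet.conflist appears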
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [db0e75e0e0ccac99b7c48227dd6e67f396c1f75e83268f3b34b6f44d09dcbdb3] <==
	{"level":"warn","ts":"2025-10-25T09:15:03.563718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.570855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.577865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.589595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.598228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.605237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.618691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.624978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.631261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.638587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.646465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.659219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.669275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.673004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.680830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.695044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.701588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.707921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.717275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.724435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.730625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.742736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.749247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.755758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:03.817932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
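	The repeated "rejected connection ... EOF" warnings above are usually bare TCP probes closing before a TLS handshake, not data-plane failures; a hedged way to check etcd health without tripping them is the apiserver's per-component health endpoint:
	  kubectl --context newest-cni-036155 get --raw='/readyz/etcd'
	  # prints "ok" when the apiserver's etcd readiness check passes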
	
	
	==> kernel <==
	 09:15:12 up 57 min,  0 user,  load average: 3.19, 3.11, 2.20
	Linux newest-cni-036155 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [def8635c724977ce8881254d02772b0b60dbfd31bd4446267a0569b07c368a1d] <==
	I1025 09:15:04.976443       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:15:04.976708       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:15:04.976813       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:15:04.976827       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:15:04.976847       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:15:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:15:05.276535       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:15:05.369520       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:15:05.369667       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:15:05.369885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:15:05.669882       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:15:05.669920       1 metrics.go:72] Registering metrics
	I1025 09:15:05.670029       1 controller.go:711] "Syncing nftables rules"
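	The "nri plugin exited" line above only means /var/run/nri/nri.sock is absent; CRI-O creates that socket only when NRI is enabled in its configuration, and kindnet carries on without it. A hedged confirmation:
	  out/minikube-linux-amd64 -p newest-cni-036155 ssh -- ls -l /var/run/nri/nri.sock
	  # "No such file or directory" here matches the log line and is benign for kindnet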
	
	
	==> kube-apiserver [59e96c6e497425eaa9001b8e1975fce360724531063a797a14e224529985ae46] <==
	I1025 09:15:04.296849       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:15:04.297290       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:15:04.297497       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:15:04.298201       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:15:04.298222       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:15:04.298229       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:15:04.298236       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:15:04.299032       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:15:04.299115       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:15:04.299419       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:15:04.299474       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:15:04.312189       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:15:04.327940       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:15:04.522846       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:15:04.638961       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:15:04.671926       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:15:04.694564       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:15:04.703997       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:15:04.764181       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.230.85"}
	I1025 09:15:04.781930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.193.197"}
	I1025 09:15:05.198755       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:15:07.976978       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:15:08.027925       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:15:08.126954       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d7607334fe7386d186adb584fdb41460a2e2fd4e9355b93061104e8167acdecd] <==
	I1025 09:15:07.602797       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:15:07.605108       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:15:07.616419       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:15:07.622933       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:15:07.623260       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:15:07.623377       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:15:07.623566       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:15:07.623659       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:15:07.623667       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:15:07.624121       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:15:07.624289       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:15:07.629324       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:15:07.629345       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:15:07.629362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:15:07.630459       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:15:07.634070       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:15:07.636010       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:15:07.641218       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-036155"
	I1025 09:15:07.641373       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:15:07.646340       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:15:07.648978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:15:07.650069       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:15:07.653238       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:15:07.659444       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:15:07.669779       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [356e11399aaa854a91c73663dbd137fd0aed6398c67418cd698a5fe62a757ae4] <==
	I1025 09:15:04.833461       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:15:04.903967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:15:05.004933       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:15:05.004980       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:15:05.005143       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:15:05.025538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:15:05.025610       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:15:05.031493       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:15:05.032003       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:15:05.032037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:15:05.033347       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:15:05.033368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:15:05.033443       1 config.go:200] "Starting service config controller"
	I1025 09:15:05.033555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:15:05.033671       1 config.go:309] "Starting node config controller"
	I1025 09:15:05.033695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:15:05.033703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:15:05.033586       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:15:05.033711       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:15:05.133906       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:15:05.133942       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:15:05.133953       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
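	The E-level line above is advisory: with nodePortAddresses unset, NodePort traffic is accepted on every local IP. kubeadm keeps this field in the kube-proxy ConfigMap, so a hedged way to inspect the setting the warning names is:
	  kubectl --context newest-cni-036155 -n kube-system get configmap kube-proxy \
	    -o jsonpath='{.data.config\.conf}' | grep nodePortAddresses
	  # setting nodePortAddresses: ["primary"] there follows the warning's own suggestion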
	
	
	==> kube-scheduler [33a0bc0f34ee99f4a348de056f809c2eea010ecb6a3c819fb1cc307364b313b0] <==
	W1025 09:15:04.208182       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:15:04.251185       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:15:04.251277       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:15:04.258568       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:15:04.258809       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:15:04.259051       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:15:04.258848       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:15:04.264881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:15:04.265002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:15:04.270134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:15:04.270374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:15:04.270578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:15:04.270724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:15:04.270945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:15:04.271166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:15:04.271167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:15:04.277257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:15:04.280033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:15:04.280356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:15:04.281502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:15:04.282423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:15:04.283098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:15:04.283352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:15:04.285620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1025 09:15:04.359206       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
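	The RBAC errors above are transient on restart: the scheduler's informers begin listing before the apiserver has re-created the bootstrap ClusterRoles, so every role in the chain reports "not found". A hedged check once things settle:
	  kubectl --context newest-cni-036155 get clusterrole system:kube-scheduler system:volume-scheduler
	  # both exist after RBAC bootstrapping completes, at which point the watches recover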
	
	
	==> kubelet <==
	Oct 25 09:15:03 newest-cni-036155 kubelet[665]: E1025 09:15:03.493518     665 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-036155\" not found" node="newest-cni-036155"
	Oct 25 09:15:03 newest-cni-036155 kubelet[665]: E1025 09:15:03.543466     665 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-036155\" not found" node="newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.224186     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.296558     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-036155\" already exists" pod="kube-system/etcd-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.296613     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.313747     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-036155\" already exists" pod="kube-system/kube-apiserver-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.313930     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.320808     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-036155\" already exists" pod="kube-system/kube-controller-manager-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.320849     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.321204     665 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.321430     665 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.321539     665 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.322751     665 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: E1025 09:15:04.328618     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-036155\" already exists" pod="kube-system/kube-scheduler-newest-cni-036155"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.417506     665 apiserver.go:52] "Watching apiserver"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.427274     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518109     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/654d5723-8e97-4e1c-ab21-08e23e9f574e-xtables-lock\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518168     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-cni-cfg\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518313     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-xtables-lock\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518376     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/176c8540-38da-4aff-8d5f-39bf3ec9b000-lib-modules\") pod \"kindnet-pbnz4\" (UID: \"176c8540-38da-4aff-8d5f-39bf3ec9b000\") " pod="kube-system/kindnet-pbnz4"
	Oct 25 09:15:04 newest-cni-036155 kubelet[665]: I1025 09:15:04.518469     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/654d5723-8e97-4e1c-ab21-08e23e9f574e-lib-modules\") pod \"kube-proxy-6wgfs\" (UID: \"654d5723-8e97-4e1c-ab21-08e23e9f574e\") " pod="kube-system/kube-proxy-6wgfs"
	Oct 25 09:15:07 newest-cni-036155 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:15:07 newest-cni-036155 kubelet[665]: I1025 09:15:07.082181     665 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 09:15:07 newest-cni-036155 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:15:07 newest-cni-036155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
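	The final lines show the pause path stopping kubelet through systemd before freezing containers; a hedged way to observe the same state transition by hand:
	  out/minikube-linux-amd64 -p newest-cni-036155 ssh -- sudo systemctl is-active kubelet
	  # expected "inactive" after a successful pause and "active" again after unpause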
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-036155 -n newest-cni-036155
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-036155 -n newest-cni-036155: exit status 2 (381.058152ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-036155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5: exit status 1 (68.67257ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-2g5ff" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-986bs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c9sj5" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-036155 describe pod coredns-66bc5c9577-2g5ff storage-provisioner dashboard-metrics-scraper-6ffb444bf9-986bs kubernetes-dashboard-855c9754f9-c9sj5: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.37s)

x
+
TestStartStop/group/embed-certs/serial/Pause (6.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-106968 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-106968 --alsologtostderr -v=1: exit status 80 (1.781730778s)

-- stdout --
	* Pausing node embed-certs-106968 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:15:36.403747  284632 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:36.403969  284632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:36.403977  284632 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:36.403981  284632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:36.404186  284632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:36.404426  284632 out.go:368] Setting JSON to false
	I1025 09:15:36.404477  284632 mustload.go:65] Loading cluster: embed-certs-106968
	I1025 09:15:36.404866  284632 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:36.405267  284632 cli_runner.go:164] Run: docker container inspect embed-certs-106968 --format={{.State.Status}}
	I1025 09:15:36.425741  284632 host.go:66] Checking if "embed-certs-106968" exists ...
	I1025 09:15:36.426069  284632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:36.490184  284632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:15:36.4791625 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:36.490948  284632 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-106968 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:15:36.492675  284632 out.go:179] * Pausing node embed-certs-106968 ... 
	I1025 09:15:36.493819  284632 host.go:66] Checking if "embed-certs-106968" exists ...
	I1025 09:15:36.494090  284632 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:36.494143  284632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-106968
	I1025 09:15:36.514832  284632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/embed-certs-106968/id_rsa Username:docker}
	I1025 09:15:36.620607  284632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:36.634398  284632 pause.go:52] kubelet running: true
	I1025 09:15:36.634475  284632 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:36.839546  284632 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:36.839679  284632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:36.924436  284632 cri.go:89] found id: "3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f"
	I1025 09:15:36.924465  284632 cri.go:89] found id: "0553f0bb1ffb9292e667528ee940875c401cef5ffdc7d9d0b2a6254ea2f48bb4"
	I1025 09:15:36.924471  284632 cri.go:89] found id: "b9eea2497cea5220336461976fd7a8b5dc1b5ffee643fdef046f11ca9427edd6"
	I1025 09:15:36.924476  284632 cri.go:89] found id: "7a79aee2c4047ff17a490493c6fabf5d9bf45c412c892472070caeb72cab191d"
	I1025 09:15:36.924480  284632 cri.go:89] found id: "c7f9b2e31210a0e8cec194cd09bb4249f8bdfccefdcdfc0247b7045f2826a78c"
	I1025 09:15:36.924484  284632 cri.go:89] found id: "c648a3db147adba437828b8bb877ee3ed46dad5ba403d4d1114c0bb1060d15d1"
	I1025 09:15:36.924488  284632 cri.go:89] found id: "2ef3d4094386517bb13e629728d51979ce32350e4cc4fdc820576cb2101fd8b5"
	I1025 09:15:36.924492  284632 cri.go:89] found id: "8c0ca7560cc31a31d55fa3e6598cfaffb772455fa1a71284e0cc016b5d7ca083"
	I1025 09:15:36.924496  284632 cri.go:89] found id: "5f6ebdb3d286f37cd6ede568d0ef9b8b18e5bcd2de579823ff85eae51b26b151"
	I1025 09:15:36.924503  284632 cri.go:89] found id: "7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97"
	I1025 09:15:36.924508  284632 cri.go:89] found id: "a5f2279abdd3d8573970804fa06c858ff73b788144c0c791ed73128c4381f6d0"
	I1025 09:15:36.924511  284632 cri.go:89] found id: ""
	I1025 09:15:36.924556  284632 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:36.938130  284632 retry.go:31] will retry after 248.206041ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:37.187491  284632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:37.204121  284632 pause.go:52] kubelet running: false
	I1025 09:15:37.204182  284632 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:37.396474  284632 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:37.396572  284632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:37.471613  284632 cri.go:89] found id: "3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f"
	I1025 09:15:37.471672  284632 cri.go:89] found id: "0553f0bb1ffb9292e667528ee940875c401cef5ffdc7d9d0b2a6254ea2f48bb4"
	I1025 09:15:37.471680  284632 cri.go:89] found id: "b9eea2497cea5220336461976fd7a8b5dc1b5ffee643fdef046f11ca9427edd6"
	I1025 09:15:37.471685  284632 cri.go:89] found id: "7a79aee2c4047ff17a490493c6fabf5d9bf45c412c892472070caeb72cab191d"
	I1025 09:15:37.471690  284632 cri.go:89] found id: "c7f9b2e31210a0e8cec194cd09bb4249f8bdfccefdcdfc0247b7045f2826a78c"
	I1025 09:15:37.471694  284632 cri.go:89] found id: "c648a3db147adba437828b8bb877ee3ed46dad5ba403d4d1114c0bb1060d15d1"
	I1025 09:15:37.471698  284632 cri.go:89] found id: "2ef3d4094386517bb13e629728d51979ce32350e4cc4fdc820576cb2101fd8b5"
	I1025 09:15:37.471702  284632 cri.go:89] found id: "8c0ca7560cc31a31d55fa3e6598cfaffb772455fa1a71284e0cc016b5d7ca083"
	I1025 09:15:37.471707  284632 cri.go:89] found id: "5f6ebdb3d286f37cd6ede568d0ef9b8b18e5bcd2de579823ff85eae51b26b151"
	I1025 09:15:37.471726  284632 cri.go:89] found id: "7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97"
	I1025 09:15:37.471735  284632 cri.go:89] found id: "a5f2279abdd3d8573970804fa06c858ff73b788144c0c791ed73128c4381f6d0"
	I1025 09:15:37.471739  284632 cri.go:89] found id: ""
	I1025 09:15:37.471790  284632 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:37.484312  284632 retry.go:31] will retry after 369.811319ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:37Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:37.854701  284632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:37.869902  284632 pause.go:52] kubelet running: false
	I1025 09:15:37.869963  284632 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:38.026831  284632 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:38.026902  284632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:38.094409  284632 cri.go:89] found id: "3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f"
	I1025 09:15:38.094438  284632 cri.go:89] found id: "0553f0bb1ffb9292e667528ee940875c401cef5ffdc7d9d0b2a6254ea2f48bb4"
	I1025 09:15:38.094444  284632 cri.go:89] found id: "b9eea2497cea5220336461976fd7a8b5dc1b5ffee643fdef046f11ca9427edd6"
	I1025 09:15:38.094449  284632 cri.go:89] found id: "7a79aee2c4047ff17a490493c6fabf5d9bf45c412c892472070caeb72cab191d"
	I1025 09:15:38.094454  284632 cri.go:89] found id: "c7f9b2e31210a0e8cec194cd09bb4249f8bdfccefdcdfc0247b7045f2826a78c"
	I1025 09:15:38.094460  284632 cri.go:89] found id: "c648a3db147adba437828b8bb877ee3ed46dad5ba403d4d1114c0bb1060d15d1"
	I1025 09:15:38.094464  284632 cri.go:89] found id: "2ef3d4094386517bb13e629728d51979ce32350e4cc4fdc820576cb2101fd8b5"
	I1025 09:15:38.094469  284632 cri.go:89] found id: "8c0ca7560cc31a31d55fa3e6598cfaffb772455fa1a71284e0cc016b5d7ca083"
	I1025 09:15:38.094473  284632 cri.go:89] found id: "5f6ebdb3d286f37cd6ede568d0ef9b8b18e5bcd2de579823ff85eae51b26b151"
	I1025 09:15:38.094483  284632 cri.go:89] found id: "7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97"
	I1025 09:15:38.094487  284632 cri.go:89] found id: "a5f2279abdd3d8573970804fa06c858ff73b788144c0c791ed73128c4381f6d0"
	I1025 09:15:38.094491  284632 cri.go:89] found id: ""
	I1025 09:15:38.094549  284632 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:38.108923  284632 out.go:203] 
	W1025 09:15:38.110011  284632 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:15:38.110029  284632 out.go:285] * 
	* 
	W1025 09:15:38.114051  284632 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:15:38.115496  284632 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-106968 --alsologtostderr -v=1 failed: exit status 80
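
Note on the failure mode above: all three pause attempts follow the same shape. crictl still enumerates the pod containers, but `sudo runc list -f json` aborts because `/run/runc` is absent on this CRI-O node, so the pause never obtains a container list to act on, and minikube's retry.go backs off twice (≈248ms, then ≈370ms) before exiting with GUEST_PAUSE. A minimal Go sketch of that retry-with-backoff pattern, for reference; the helper below is illustrative only, not minikube's actual API:

	// Illustrative sketch of the retry-with-backoff pattern visible in the
	// retry.go lines above; names here are hypothetical, not minikube's API.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, sleeping a randomized,
	// growing interval between tries, and returns the last error on exhaustion.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Randomize around a growing base, like the ~248ms/~370ms waits above.
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(3, 200*time.Millisecond, func() error {
			// Stand-in for `sudo runc list -f json` run over SSH.
			return fmt.Errorf("open /run/runc: no such file or directory")
		})
		fmt.Println("giving up:", err)
	}

The randomized, growing waits keep repeated probes from hammering the node; here they cannot help, since the runc root directory never appears.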
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-106968
helpers_test.go:243: (dbg) docker inspect embed-certs-106968:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2",
	        "Created": "2025-10-25T09:13:06.160714175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:14:37.87793428Z",
	            "FinishedAt": "2025-10-25T09:14:36.98629726Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/hosts",
	        "LogPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2-json.log",
	        "Name": "/embed-certs-106968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-106968:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-106968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2",
	                "LowerDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-106968",
	                "Source": "/var/lib/docker/volumes/embed-certs-106968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-106968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-106968",
	                "name.minikube.sigs.k8s.io": "embed-certs-106968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f2ec0ea2b867c30f6aa7e065db973cdf21aa8dfd947fb2e8acd3048b579e70d",
	            "SandboxKey": "/var/run/docker/netns/5f2ec0ea2b86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-106968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:76:e7:82:26:b7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d58a21465e1439a449774f24fb5c5d02c9ed0fbccfcab14073246dc3e313836",
	                    "EndpointID": "d05c169e307afc88d3f141bb015400e4762e8dd3c87e817e0632e7007fdc528a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-106968",
	                        "e1514b582330"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
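
For orientation: the SSH port 33090 that sshutil.go used earlier comes straight from the NetworkSettings.Ports map in this inspect output; the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call in the log is the same lookup expressed as a Go template. A minimal sketch, assuming only the JSON shape shown above (the function name is hypothetical):

	// Minimal sketch: pull the host port bound to the container's 22/tcp,
	// as the inspect template in the log does. Shape matches the JSON above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var infos []inspect // `docker inspect` returns a JSON array
		if err := json.Unmarshal(out, &infos); err != nil {
			return "", err
		}
		if len(infos) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := infos[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp")
		}
		return bindings[0].HostPort, nil // "33090" for embed-certs-106968 above
	}

	func main() {
		port, err := sshPort("embed-certs-106968")
		fmt.Println(port, err)
	}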
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968: exit status 2 (359.292848ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
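
That "(may be ok)" is deliberate: `minikube status` folds component health into its exit code, so a host that prints Running can still exit non-zero when kubelet is down, which is exactly the state the failed pause left behind (kubelet was disabled at 09:15:36 and never restarted). A hedged sketch of that kind of bitmask exit code; the flag names and values are illustrative assumptions, not copied from minikube's status command:

	// Hedged sketch of an exit code that folds component health into bits,
	// consistent with the "exit status 2 (may be ok)" reading above.
	package main

	import (
		"fmt"
		"os"
	)

	const (
		hostNotRunning    = 1 << 0 // container/VM is down
		clusterNotRunning = 1 << 1 // kubelet is down (further component bits could follow)
	)

	func main() {
		hostUp, kubeletUp := true, false // the post-mortem state: Running host, disabled kubelet
		code := 0
		if !hostUp {
			code |= hostNotRunning
		}
		if !kubeletUp {
			code |= clusterNotRunning
		}
		fmt.Println("Running") // the status text reflects the host...
		os.Exit(code)          // ...while the exit code (2 here) reflects kubelet
	}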
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-106968 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-106968 logs -n 25: (1.180812188s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-891466 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ stop    │ -p embed-certs-106968 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p newest-cni-036155 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-106968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-891466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-036155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ image   │ newest-cni-036155 image list --format=json                                                                                                                                                                                                    │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p newest-cni-036155 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p auto-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-687131                  │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p kindnet-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-687131               │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ image   │ embed-certs-106968 image list --format=json                                                                                                                                                                                                   │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p embed-certs-106968 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:16.020787  279928 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:16.021157  279928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:16.021171  279928 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:16.021178  279928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:16.021473  279928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:16.022216  279928 out.go:368] Setting JSON to false
	I1025 09:15:16.023688  279928 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3464,"bootTime":1761380252,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:16.023798  279928 start.go:141] virtualization: kvm guest
	I1025 09:15:16.026632  279928 out.go:179] * [kindnet-687131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:16.028561  279928 notify.go:220] Checking for updates...
	I1025 09:15:16.028593  279928 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:15:16.030119  279928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:16.031829  279928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:16.033381  279928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:15:16.034874  279928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:15:16.036503  279928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:15:16.038554  279928 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038660  279928 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038733  279928 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038820  279928 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:16.066342  279928 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:16.066508  279928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:16.134706  279928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:68 SystemTime:2025-10-25 09:15:16.122944363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:16.134814  279928 docker.go:318] overlay module found
	I1025 09:15:16.137572  279928 out.go:179] * Using the docker driver based on user configuration
	I1025 09:15:16.140435  279928 start.go:305] selected driver: docker
	I1025 09:15:16.140457  279928 start.go:925] validating driver "docker" against <nil>
	I1025 09:15:16.140470  279928 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:15:16.141086  279928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:16.207410  279928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 09:15:16.195269689 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:16.207685  279928 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:15:16.207951  279928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:15:16.210244  279928 out.go:179] * Using Docker driver with root privileges
	I1025 09:15:16.211682  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:16.211710  279928 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:15:16.211813  279928 start.go:349] cluster config:
	{Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:16.213496  279928 out.go:179] * Starting "kindnet-687131" primary control-plane node in "kindnet-687131" cluster
	I1025 09:15:16.214878  279928 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:15:16.216267  279928 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:16.217483  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:16.217519  279928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:16.217533  279928 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:15:16.217544  279928 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:16.217693  279928 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:15:16.217707  279928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:15:16.217850  279928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json ...
	I1025 09:15:16.217881  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json: {Name:mk59edad4f0461fbcf9ec630103ca3869ab6269c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:16.242933  279928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:15:16.242960  279928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:15:16.242982  279928 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:15:16.243012  279928 start.go:360] acquireMachinesLock for kindnet-687131: {Name:mk9e87ffb8b828e3d740e3d2456d3f613e75798f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:15:16.243126  279928 start.go:364] duration metric: took 91.55µs to acquireMachinesLock for "kindnet-687131"
	I1025 09:15:16.243170  279928 start.go:93] Provisioning new machine with config: &{Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:15:16.243276  279928 start.go:125] createHost starting for "" (driver="docker")
	W1025 09:15:14.166974  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:16.172374  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:15.890048  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:17.890391  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:15.786223  279556 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:15:15.786457  279556 start.go:159] libmachine.API.Create for "auto-687131" (driver="docker")
	I1025 09:15:15.786489  279556 client.go:168] LocalClient.Create starting
	I1025 09:15:15.786579  279556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:15:15.786623  279556 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:15.786675  279556 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:15.786756  279556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:15:15.786785  279556 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:15.786803  279556 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:15.787187  279556 cli_runner.go:164] Run: docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:15:15.806182  279556 cli_runner.go:211] docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:15:15.806242  279556 network_create.go:284] running [docker network inspect auto-687131] to gather additional debugging logs...
	I1025 09:15:15.806261  279556 cli_runner.go:164] Run: docker network inspect auto-687131
	W1025 09:15:15.827929  279556 cli_runner.go:211] docker network inspect auto-687131 returned with exit code 1
	I1025 09:15:15.827975  279556 network_create.go:287] error running [docker network inspect auto-687131]: docker network inspect auto-687131: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-687131 not found
	I1025 09:15:15.827997  279556 network_create.go:289] output of [docker network inspect auto-687131]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-687131 not found
	
	** /stderr **
	I1025 09:15:15.828184  279556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:15.847440  279556 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:15:15.848061  279556 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:15:15.848790  279556 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:15:15.849253  279556 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:15:15.850068  279556 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e752b0}
	I1025 09:15:15.850116  279556 network_create.go:124] attempt to create docker network auto-687131 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 09:15:15.850193  279556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-687131 auto-687131
	I1025 09:15:15.916274  279556 network_create.go:108] docker network auto-687131 192.168.85.0/24 created
	I1025 09:15:15.916314  279556 kic.go:121] calculated static IP "192.168.85.2" for the "auto-687131" container
	I1025 09:15:15.916418  279556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:15:15.937311  279556 cli_runner.go:164] Run: docker volume create auto-687131 --label name.minikube.sigs.k8s.io=auto-687131 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:15:15.958005  279556 oci.go:103] Successfully created a docker volume auto-687131
	I1025 09:15:15.958109  279556 cli_runner.go:164] Run: docker run --rm --name auto-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-687131 --entrypoint /usr/bin/test -v auto-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:15:16.396685  279556 oci.go:107] Successfully prepared a docker volume auto-687131
	I1025 09:15:16.396740  279556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:16.396765  279556 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:15:16.396833  279556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:15:19.141617  279556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (2.744742156s)
	I1025 09:15:19.141672  279556 kic.go:203] duration metric: took 2.74490357s to extract preloaded images to volume ...
	W1025 09:15:19.141768  279556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:15:19.141825  279556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:15:19.141868  279556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:15:19.210146  279556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-687131 --name auto-687131 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-687131 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-687131 --network auto-687131 --ip 192.168.85.2 --volume auto-687131:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:15:19.547183  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Running}}
	I1025 09:15:19.568747  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.588991  279556 cli_runner.go:164] Run: docker exec auto-687131 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:15:19.639905  279556 oci.go:144] the created container "auto-687131" has a running status.
	I1025 09:15:19.639945  279556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa...
	I1025 09:15:19.760291  279556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:15:19.795261  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.821632  279556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:15:19.821699  279556 kic_runner.go:114] Args: [docker exec --privileged auto-687131 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:15:19.870801  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
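The id_rsa / authorized_keys steps above bootstrap SSH access into the freshly started node container. A rough hand-rolled equivalent, assuming a container name and an existing /home/docker/.ssh directory (both hypothetical here):

    NODE=demo-node   # hypothetical container name

    # Generate a passphrase-less keypair for the node.
    ssh-keygen -t rsa -N '' -f ./id_rsa

    # Install the public half and hand ownership to the in-container
    # "docker" user, mirroring the chown shown in the log.
    docker cp ./id_rsa.pub "$NODE":/home/docker/.ssh/authorized_keys
    docker exec --privileged "$NODE" chown docker:docker /home/docker/.ssh/authorized_keys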
	I1025 09:15:19.898909  279556 machine.go:93] provisionDockerMachine start ...
	I1025 09:15:19.899009  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:19.922667  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:19.923027  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:19.923059  279556 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:15:20.067753  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-687131
	
	I1025 09:15:20.067781  279556 ubuntu.go:182] provisioning hostname "auto-687131"
	I1025 09:15:20.067841  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:20.086111  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:20.086338  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:20.086354  279556 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-687131 && echo "auto-687131" | sudo tee /etc/hostname
	I1025 09:15:20.271814  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-687131
	
	I1025 09:15:20.271897  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:20.292274  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:20.292587  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:20.292623  279556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-687131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-687131/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-687131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:15:20.442537  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:15:20.442571  279556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:15:20.442604  279556 ubuntu.go:190] setting up certificates
	I1025 09:15:20.442619  279556 provision.go:84] configureAuth start
	I1025 09:15:20.442691  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:20.460617  279556 provision.go:143] copyHostCerts
	I1025 09:15:20.460717  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:15:20.460730  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:15:20.510975  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:15:20.511209  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:15:20.511225  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:15:20.511278  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:15:20.511407  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:15:20.511419  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:15:20.511456  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:15:20.511555  279556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.auto-687131 san=[127.0.0.1 192.168.85.2 auto-687131 localhost minikube]
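configureAuth issues a server certificate whose SAN list covers loopback, the node's static IP, and its hostnames, as the san=[...] field above shows. A hedged openssl equivalent of that step (file names are placeholders; minikube generates these certificates in Go rather than by shelling out to openssl):

    # Key + CSR for the server certificate.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -subj "/O=jenkins.auto-687131" -out server.csr

    # Sign it with the machine CA and attach the SANs from the log line.
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:auto-687131,DNS:localhost,DNS:minikube') \
      -out server.pem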
	I1025 09:15:16.245622  279928 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:15:16.245926  279928 start.go:159] libmachine.API.Create for "kindnet-687131" (driver="docker")
	I1025 09:15:16.245971  279928 client.go:168] LocalClient.Create starting
	I1025 09:15:16.246054  279928 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:15:16.246095  279928 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:16.246115  279928 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:16.246201  279928 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:15:16.246246  279928 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:16.246267  279928 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:16.246894  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:15:16.270502  279928 cli_runner.go:211] docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:15:16.270577  279928 network_create.go:284] running [docker network inspect kindnet-687131] to gather additional debugging logs...
	I1025 09:15:16.270592  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131
	W1025 09:15:16.290826  279928 cli_runner.go:211] docker network inspect kindnet-687131 returned with exit code 1
	I1025 09:15:16.290865  279928 network_create.go:287] error running [docker network inspect kindnet-687131]: docker network inspect kindnet-687131: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-687131 not found
	I1025 09:15:16.290881  279928 network_create.go:289] output of [docker network inspect kindnet-687131]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-687131 not found
	
	** /stderr **
	I1025 09:15:16.290987  279928 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:16.314287  279928 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:15:16.315250  279928 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:15:16.316258  279928 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:15:16.316988  279928 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:15:16.317865  279928 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-427f290f6b13 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:07:d0:a1:54:23} reservation:<nil>}
	I1025 09:15:16.318520  279928 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5d58a21465e1 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4e:78:a8:09:a3:02} reservation:<nil>}
	I1025 09:15:16.319390  279928 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe0500}
	I1025 09:15:16.319416  279928 network_create.go:124] attempt to create docker network kindnet-687131 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:15:16.319460  279928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-687131 kindnet-687131
	I1025 09:15:16.397907  279928 network_create.go:108] docker network kindnet-687131 192.168.103.0/24 created
	I1025 09:15:16.397939  279928 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-687131" container
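The run of "skipping subnet ... that is taken" lines above shows how minikube walks candidate /24 blocks in order and rejects any already held by an existing bridge before settling on 192.168.103.0/24. The occupancy data comes from docker's own network metadata; the view it works from can be reproduced with:

    # Print each docker network together with the subnet it claims,
    # i.e. the data behind the "skipping subnet" decisions above.
    docker network ls --format '{{.Name}}' | while read -r net; do
      docker network inspect "$net" \
        --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
    done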
	I1025 09:15:16.397993  279928 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:15:16.417914  279928 cli_runner.go:164] Run: docker volume create kindnet-687131 --label name.minikube.sigs.k8s.io=kindnet-687131 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:15:16.437974  279928 oci.go:103] Successfully created a docker volume kindnet-687131
	I1025 09:15:16.438054  279928 cli_runner.go:164] Run: docker run --rm --name kindnet-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --entrypoint /usr/bin/test -v kindnet-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:15:17.461263  279928 cli_runner.go:217] Completed: docker run --rm --name kindnet-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --entrypoint /usr/bin/test -v kindnet-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.023162971s)
	I1025 09:15:17.461305  279928 oci.go:107] Successfully prepared a docker volume kindnet-687131
	I1025 09:15:17.461333  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:17.461353  279928 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:15:17.461430  279928 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 09:15:18.301233  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:20.666718  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	I1025 09:15:22.166607  267761 pod_ready.go:94] pod "coredns-66bc5c9577-dx4j4" is "Ready"
	I1025 09:15:22.166687  267761 pod_ready.go:86] duration metric: took 33.505954367s for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.170010  267761 pod_ready.go:83] waiting for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.174911  267761 pod_ready.go:94] pod "etcd-embed-certs-106968" is "Ready"
	I1025 09:15:22.174944  267761 pod_ready.go:86] duration metric: took 4.899097ms for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.177358  267761 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.181786  267761 pod_ready.go:94] pod "kube-apiserver-embed-certs-106968" is "Ready"
	I1025 09:15:22.181822  267761 pod_ready.go:86] duration metric: took 4.436379ms for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.183829  267761 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.364905  267761 pod_ready.go:94] pod "kube-controller-manager-embed-certs-106968" is "Ready"
	I1025 09:15:22.364933  267761 pod_ready.go:86] duration metric: took 181.084937ms for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.565796  267761 pod_ready.go:83] waiting for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.964268  267761 pod_ready.go:94] pod "kube-proxy-sm8hw" is "Ready"
	I1025 09:15:22.964293  267761 pod_ready.go:86] duration metric: took 398.467936ms for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.164880  267761 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.565174  267761 pod_ready.go:94] pod "kube-scheduler-embed-certs-106968" is "Ready"
	I1025 09:15:23.565206  267761 pod_ready.go:86] duration metric: took 400.294371ms for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.565222  267761 pod_ready.go:40] duration metric: took 34.9096785s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:15:23.621826  267761 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:15:23.624241  267761 out.go:179] * Done! kubectl is now configured to use "embed-certs-106968" cluster and "default" namespace by default
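The pod_ready polling above (coredns, etcd, apiserver, controller-manager, proxy, scheduler) is minikube's own readiness gate before declaring "Done!". The same check can be reproduced after the fact with kubectl wait, e.g. for the CoreDNS pod that took the longest in this run:

    # Re-run the readiness check by hand against the finished cluster.
    kubectl --context embed-certs-106968 -n kube-system \
      wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s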
	I1025 09:15:21.341448  279556 provision.go:177] copyRemoteCerts
	I1025 09:15:21.341532  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:15:21.341608  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:21.362919  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:21.473321  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:15:21.654106  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 09:15:21.717581  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:15:21.741804  279556 provision.go:87] duration metric: took 1.299167498s to configureAuth
	I1025 09:15:21.741856  279556 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:15:21.742057  279556 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:21.742325  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:21.768335  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:21.769187  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:21.769223  279556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:15:22.255810  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:15:22.255850  279556 machine.go:96] duration metric: took 2.356919433s to provisionDockerMachine
	I1025 09:15:22.255864  279556 client.go:171] duration metric: took 6.469363636s to LocalClient.Create
	I1025 09:15:22.255894  279556 start.go:167] duration metric: took 6.469435334s to libmachine.API.Create "auto-687131"
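The SSH command above drops a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig/crio.minikube, marking the 10.96.0.0/12 service CIDR as an insecure registry, and restarts CRI-O. Once the profile is up, both effects can be spot-checked from the host:

    # Confirm the sysconfig override landed and CRI-O survived the restart.
    minikube -p auto-687131 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p auto-687131 ssh -- systemctl is-active crio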
	I1025 09:15:22.255910  279556 start.go:293] postStartSetup for "auto-687131" (driver="docker")
	I1025 09:15:22.255923  279556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:15:22.255996  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:15:22.256050  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.277614  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.387947  279556 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:15:22.395824  279556 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:15:22.395865  279556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:15:22.395879  279556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:15:22.395950  279556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:15:22.396136  279556 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:15:22.396541  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:15:22.407550  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:22.434048  279556 start.go:296] duration metric: took 178.121274ms for postStartSetup
	I1025 09:15:22.434977  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:22.457420  279556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/config.json ...
	I1025 09:15:22.457771  279556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:15:22.457824  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.480826  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.584880  279556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:15:22.590391  279556 start.go:128] duration metric: took 6.806327034s to createHost
	I1025 09:15:22.590431  279556 start.go:83] releasing machines lock for "auto-687131", held for 6.80645362s
	I1025 09:15:22.590493  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:22.610539  279556 ssh_runner.go:195] Run: cat /version.json
	I1025 09:15:22.610583  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.610603  279556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:15:22.610695  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.630329  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.630621  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.798632  279556 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:22.806370  279556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:15:22.847984  279556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:15:22.853905  279556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:15:22.853979  279556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:15:22.881992  279556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
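Before wiring up its own CNI, minikube sidelines any preinstalled bridge/podman CNI configs by renaming them with a .mk_disabled suffix, which is what the find ... -exec mv step above does. To see the result inside the node:

    # List the CNI config directory; sidelined files carry .mk_disabled.
    docker exec auto-687131 ls -l /etc/cni/net.d/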
	I1025 09:15:22.882017  279556 start.go:495] detecting cgroup driver to use...
	I1025 09:15:22.882050  279556 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:15:22.882096  279556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:15:22.902000  279556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:15:22.917189  279556 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:15:22.917246  279556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:15:22.935738  279556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:15:22.960242  279556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:15:23.066373  279556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:15:23.203040  279556 docker.go:234] disabling docker service ...
	I1025 09:15:23.203110  279556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:15:23.225691  279556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:15:23.242722  279556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:15:23.338881  279556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:15:23.436201  279556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:15:23.449397  279556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:15:23.465144  279556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:15:23.465208  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.476785  279556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:15:23.476857  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.486376  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.496079  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.507141  279556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:15:23.516073  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.526594  279556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.544236  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.554362  279556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:15:23.563498  279556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:15:23.572509  279556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:23.669764  279556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:15:23.790270  279556 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:15:23.790374  279556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:15:23.794532  279556 start.go:563] Will wait 60s for crictl version
	I1025 09:15:23.794589  279556 ssh_runner.go:195] Run: which crictl
	I1025 09:15:23.798393  279556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:15:23.823069  279556 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:15:23.823148  279556 ssh_runner.go:195] Run: crio --version
	I1025 09:15:23.852060  279556 ssh_runner.go:195] Run: crio --version
	I1025 09:15:23.884239  279556 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
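The preceding sed runs rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to systemd, force conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. A quick way to confirm the overrides after the crio restart:

    # Show the rewritten CRI-O drop-in settings inside the node.
    docker exec auto-687131 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf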
	W1025 09:15:19.896862  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:22.390120  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:23.885891  279556 cli_runner.go:164] Run: docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:23.906293  279556 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:15:23.911133  279556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:23.925504  279556 kubeadm.go:883] updating cluster {Name:auto-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:15:23.925712  279556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:23.925784  279556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:23.966169  279556 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:23.966190  279556 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:15:23.966243  279556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:23.994585  279556 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:23.994604  279556 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:15:23.994611  279556 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:15:23.994737  279556 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-687131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:15:23.994831  279556 ssh_runner.go:195] Run: crio config
	I1025 09:15:24.046767  279556 cni.go:84] Creating CNI manager for ""
	I1025 09:15:24.046790  279556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:24.046811  279556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:15:24.046837  279556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-687131 NodeName:auto-687131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:15:24.046988  279556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-687131"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:15:24.047063  279556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:15:24.055111  279556 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:15:24.055172  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:15:24.063035  279556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 09:15:24.076837  279556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:15:24.094395  279556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
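The generated kubeadm config is shipped to /var/tmp/minikube/kubeadm.yaml.new (the scp line above) and is consumed later in the start sequence, outside this excerpt. Roughly, and with an illustrative ignore list rather than the exact flags minikube passes (those vary by version):

    # Sketch of the later bootstrap step, under the assumptions above.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests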
	I1025 09:15:24.107726  279556 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:15:24.112067  279556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:24.122709  279556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:24.208028  279556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:24.236216  279556 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131 for IP: 192.168.85.2
	I1025 09:15:24.236238  279556 certs.go:195] generating shared ca certs ...
	I1025 09:15:24.236256  279556 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.236434  279556 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:15:24.236488  279556 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:15:24.236501  279556 certs.go:257] generating profile certs ...
	I1025 09:15:24.236564  279556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key
	I1025 09:15:24.236581  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt with IP's: []
	I1025 09:15:24.928992  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt ...
	I1025 09:15:24.929020  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt: {Name:mk779bd9fdf8eaa5918f81c459f798815b970211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.929218  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key ...
	I1025 09:15:24.929242  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key: {Name:mk46972b19f1fd85299d3aff68dfc355ea581ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.929386  279556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded
	I1025 09:15:24.929408  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 09:15:25.370687  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded ...
	I1025 09:15:25.370717  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded: {Name:mk758bb25e73fe6bee588c76326f09382b8c326f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.370874  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded ...
	I1025 09:15:25.370888  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded: {Name:mk7ceb126fbb04a31aaba790cb04f339aa54e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.370958  279556 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt
	I1025 09:15:25.371030  279556 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key
	I1025 09:15:25.371087  279556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key
	I1025 09:15:25.371102  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt with IP's: []
	I1025 09:15:25.463911  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt ...
	I1025 09:15:25.463935  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt: {Name:mk4787cbad8c90eaac31b2526653c5fcc02d8be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.464075  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key ...
	I1025 09:15:25.464086  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key: {Name:mk2a6539101452dd3e491062dcc240c2c53ba421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.464280  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:15:25.464315  279556 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:15:25.464324  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:15:25.464345  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:15:25.464370  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:15:25.464393  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:15:25.464431  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:25.464974  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:15:25.483378  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:15:25.501823  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:15:25.520038  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:15:25.539492  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 09:15:25.558370  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
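With the profile certificates generated and copied into /var/lib/minikube/certs, the apiserver certificate's SAN set (10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP, per the crypto.go line above) can be verified from the host with OpenSSL 1.1.1 or newer:

    # Print the SANs baked into the freshly generated apiserver cert
    # (path shown for this CI workspace; adjust to your own .minikube dir).
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt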
	I1025 09:15:22.440164  279928 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.978689163s)
	I1025 09:15:22.440203  279928 kic.go:203] duration metric: took 4.978845546s to extract preloaded images to volume ...
	W1025 09:15:22.440286  279928 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:15:22.440329  279928 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:15:22.440367  279928 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:15:22.506269  279928 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-687131 --name kindnet-687131 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-687131 --network kindnet-687131 --ip 192.168.103.2 --volume kindnet-687131:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:15:22.799105  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Running}}
	I1025 09:15:22.820206  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:22.842929  279928 cli_runner.go:164] Run: docker exec kindnet-687131 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:15:22.892617  279928 oci.go:144] the created container "kindnet-687131" has a running status.
	I1025 09:15:22.892659  279928 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa...
	I1025 09:15:23.014325  279928 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:15:23.049009  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:23.070457  279928 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:15:23.070500  279928 kic_runner.go:114] Args: [docker exec --privileged kindnet-687131 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:15:23.134243  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:23.158093  279928 machine.go:93] provisionDockerMachine start ...
	I1025 09:15:23.158226  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.181058  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.181403  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.181428  279928 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:15:23.331938  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-687131
	
	I1025 09:15:23.331970  279928 ubuntu.go:182] provisioning hostname "kindnet-687131"
	I1025 09:15:23.332035  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.353853  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.354132  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.354153  279928 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-687131 && echo "kindnet-687131" | sudo tee /etc/hostname
	I1025 09:15:23.515310  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-687131
	
	I1025 09:15:23.515394  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.537215  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.537527  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.537560  279928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-687131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-687131/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-687131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:15:23.688101  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:15:23.688132  279928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:15:23.688166  279928 ubuntu.go:190] setting up certificates
	I1025 09:15:23.688179  279928 provision.go:84] configureAuth start
	I1025 09:15:23.688244  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:23.709237  279928 provision.go:143] copyHostCerts
	I1025 09:15:23.709298  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:15:23.709318  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:15:23.709404  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:15:23.709548  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:15:23.709565  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:15:23.709612  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:15:23.709727  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:15:23.709739  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:15:23.709774  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:15:23.709864  279928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.kindnet-687131 san=[127.0.0.1 192.168.103.2 kindnet-687131 localhost minikube]
	I1025 09:15:23.878508  279928 provision.go:177] copyRemoteCerts
	I1025 09:15:23.878559  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:15:23.878599  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.900441  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.009301  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:15:24.031121  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:15:24.051157  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1025 09:15:24.069764  279928 provision.go:87] duration metric: took 381.568636ms to configureAuth
	I1025 09:15:24.069798  279928 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:15:24.069969  279928 config.go:182] Loaded profile config "kindnet-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:24.070073  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.091045  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:24.091297  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:24.091319  279928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:15:24.366841  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:15:24.366868  279928 machine.go:96] duration metric: took 1.208744926s to provisionDockerMachine
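
The sysconfig drop-in written over SSH above marks the whole service CIDR (10.96.0.0/12) as an insecure registry, so an in-cluster registry can be used without TLS. A minimal sketch for inspecting the result on the node, assuming the profile name from this run:

    # Verify the drop-in exists and that cri-o came back up after the restart.
    minikube -p kindnet-687131 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p kindnet-687131 ssh -- systemctl is-active crio
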
	I1025 09:15:24.366878  279928 client.go:171] duration metric: took 8.120898239s to LocalClient.Create
	I1025 09:15:24.366903  279928 start.go:167] duration metric: took 8.120973439s to libmachine.API.Create "kindnet-687131"
	I1025 09:15:24.366916  279928 start.go:293] postStartSetup for "kindnet-687131" (driver="docker")
	I1025 09:15:24.366927  279928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:15:24.366989  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:15:24.367022  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.386435  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.490100  279928 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:15:24.493952  279928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:15:24.493982  279928 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:15:24.493997  279928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:15:24.494064  279928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:15:24.494174  279928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:15:24.494310  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:15:24.502630  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:24.524399  279928 start.go:296] duration metric: took 157.46682ms for postStartSetup
	I1025 09:15:24.524816  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:24.543897  279928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json ...
	I1025 09:15:24.544201  279928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:15:24.544248  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.562392  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.660938  279928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:15:24.666060  279928 start.go:128] duration metric: took 8.422763522s to createHost
	I1025 09:15:24.666089  279928 start.go:83] releasing machines lock for "kindnet-687131", held for 8.422948298s
	I1025 09:15:24.666161  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:24.686558  279928 ssh_runner.go:195] Run: cat /version.json
	I1025 09:15:24.686619  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.686618  279928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:15:24.686694  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.707640  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.707737  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.805204  279928 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:24.861031  279928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:15:24.899252  279928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:15:24.904135  279928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:15:24.904213  279928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:15:24.931204  279928 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
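
Renaming conflicting bridge/podman CNI configs to *.mk_disabled (rather than deleting them) keeps the change reversible, leaving kindnet as the only active CNI. A sketch of the same disable/restore pattern:

    # Disable any bridge/podman CNI configs without losing them...
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # ...and restore them later if needed.
    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
    done
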
	I1025 09:15:24.931225  279928 start.go:495] detecting cgroup driver to use...
	I1025 09:15:24.931256  279928 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:15:24.931299  279928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:15:24.948666  279928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:15:24.962055  279928 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:15:24.962115  279928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:15:24.980169  279928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:15:24.998963  279928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:15:25.096394  279928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:15:25.188449  279928 docker.go:234] disabling docker service ...
	I1025 09:15:25.188539  279928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:15:25.207995  279928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:15:25.222036  279928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:15:25.319414  279928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:15:25.412233  279928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:15:25.425899  279928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:15:25.441635  279928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:15:25.441709  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.453116  279928 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:15:25.453188  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.462464  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.471732  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.480919  279928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:15:25.490188  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.499310  279928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.514357  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.523846  279928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:15:25.532211  279928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:15:25.540303  279928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:25.626699  279928 ssh_runner.go:195] Run: sudo systemctl restart crio
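
After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain, roughly (surrounding lines vary by base image), the pause image, the systemd cgroup manager, the pod-scoped conmon cgroup, and the unprivileged-port sysctl that the individual commands wrote:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
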
	I1025 09:15:25.739482  279928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:15:25.739551  279928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:15:25.743863  279928 start.go:563] Will wait 60s for crictl version
	I1025 09:15:25.743922  279928 ssh_runner.go:195] Run: which crictl
	I1025 09:15:25.747790  279928 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:15:25.774761  279928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:15:25.774855  279928 ssh_runner.go:195] Run: crio --version
	I1025 09:15:25.809624  279928 ssh_runner.go:195] Run: crio --version
	I1025 09:15:25.841924  279928 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:15:25.843191  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:25.860519  279928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:15:25.864742  279928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
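
The host-entry update above is idempotent: it strips any stale host.minikube.internal line before appending the current gateway IP, then installs the file in one cp. The same pattern, generalized (names below are placeholders):

    # Idempotently pin NAME to IP in /etc/hosts, as in the logged command.
    NAME=host.minikube.internal IP=192.168.103.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
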
	I1025 09:15:25.875509  279928 kubeadm.go:883] updating cluster {Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:15:25.875665  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:25.875729  279928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:25.913484  279928 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:25.913504  279928 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:15:25.913547  279928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:25.943471  279928 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:25.943492  279928 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:15:25.943500  279928 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:15:25.943574  279928 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-687131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
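
The empty ExecStart= line in the kubelet unit above is the standard systemd idiom: it clears any inherited ExecStart before redefining it, since a non-oneshot service may only have one. A sketch for checking the merged unit on the node (profile name taken from this run):

    # Show the effective kubelet unit after the drop-in is applied.
    minikube -p kindnet-687131 ssh -- systemctl cat kubelet
    minikube -p kindnet-687131 ssh -- systemctl show -p ExecStart kubelet
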
	I1025 09:15:25.943633  279928 ssh_runner.go:195] Run: crio config
	I1025 09:15:25.993112  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:25.993145  279928 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:15:25.993184  279928 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-687131 NodeName:kindnet-687131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:15:25.993331  279928 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-687131"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
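
A config like the one above can be sanity-checked offline before kubeadm init runs; recent kubeadm ships a validate subcommand for exactly this (the path below matches where minikube later writes the file):

    # Validate the generated kubeadm config without starting anything.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
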
	
	I1025 09:15:25.993383  279928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:15:26.002245  279928 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:15:26.002313  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:15:26.010918  279928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1025 09:15:25.584752  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:15:25.603272  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:15:25.622186  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:15:25.643606  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:15:25.661670  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:15:25.680760  279556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:15:25.695381  279556 ssh_runner.go:195] Run: openssl version
	I1025 09:15:25.701872  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:15:25.711383  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.715855  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.715916  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.753328  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:15:25.762817  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:15:25.773811  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.778344  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.778413  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.821755  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:15:25.831598  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:15:25.840888  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.845139  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.845193  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.882755  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
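
The 3ec20f2e.0 / b5213941.0 / 51391683.0 link names above are OpenSSL subject-hash values: c_rehash-style symlinks that let the TLS stack locate a CA by the hash of its subject. Reproducing one by hand:

    # The symlink name is the cert's subject hash plus a ".0" suffix.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0
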
	I1025 09:15:25.894652  279556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:15:25.898800  279556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:15:25.898865  279556 kubeadm.go:400] StartCluster: {Name:auto-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:25.898959  279556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:15:25.899034  279556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:15:25.930732  279556 cri.go:89] found id: ""
	I1025 09:15:25.930809  279556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:15:25.940722  279556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:15:25.949522  279556 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:15:25.949590  279556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:15:25.958156  279556 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:15:25.958183  279556 kubeadm.go:157] found existing configuration files:
	
	I1025 09:15:25.958235  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:15:25.967172  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:15:25.967254  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:15:25.976067  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:15:25.984242  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:15:25.984302  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:15:25.993016  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:15:26.002372  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:15:26.002430  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:15:26.010747  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:15:26.018587  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:15:26.018650  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:15:26.026625  279556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:15:26.066622  279556 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:15:26.066753  279556 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:15:26.087508  279556 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:15:26.087610  279556 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:15:26.087697  279556 kubeadm.go:318] OS: Linux
	I1025 09:15:26.087754  279556 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:15:26.087834  279556 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:15:26.087912  279556 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:15:26.088003  279556 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:15:26.088089  279556 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:15:26.088182  279556 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:15:26.088238  279556 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:15:26.088292  279556 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:15:26.159998  279556 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:15:26.160173  279556 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:15:26.160349  279556 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:15:26.168799  279556 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
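
As the preflight output suggests, the image pull can be performed ahead of time; on a preloaded minikube node it is effectively a no-op, but the standalone command is:

    # Pre-pull the control-plane images kubeadm init would otherwise fetch.
    kubeadm config images pull --kubernetes-version v1.34.1
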
	I1025 09:15:26.024244  279928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:15:26.039712  279928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1025 09:15:26.054486  279928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:15:26.058574  279928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:26.069835  279928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:26.162803  279928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:26.190619  279928 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131 for IP: 192.168.103.2
	I1025 09:15:26.190663  279928 certs.go:195] generating shared ca certs ...
	I1025 09:15:26.190687  279928 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.190849  279928 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:15:26.190912  279928 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:15:26.190926  279928 certs.go:257] generating profile certs ...
	I1025 09:15:26.190998  279928 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key
	I1025 09:15:26.191017  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt with IP's: []
	I1025 09:15:26.219280  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt ...
	I1025 09:15:26.219307  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt: {Name:mk42146df35f32426a420017cd45ab46d2df2c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.219512  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key ...
	I1025 09:15:26.219526  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key: {Name:mka29965ab108f0e622f83908536f26ef739d604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.219659  279928 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2
	I1025 09:15:26.219684  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:15:26.329319  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 ...
	I1025 09:15:26.329363  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2: {Name:mk046cb06650a4e0f6d7e42c28f3d48d22d4b0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.329540  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2 ...
	I1025 09:15:26.329554  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2: {Name:mk530378837f592628c77d98032c76a4244f4436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.329625  279928 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt
	I1025 09:15:26.329742  279928 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key
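
The apiserver certificate generated above is signed for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]; the first address is the kubernetes.default service ClusterIP, by convention the first usable IP of the ServiceCIDR (10.96.0.0/12), which must appear in the SANs for in-cluster clients to trust the apiserver. Once the cluster is up this can be confirmed with:

    # The kubernetes service VIP that has to be covered by the apiserver cert.
    kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'   # 10.96.0.1
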
	I1025 09:15:26.329805  279928 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key
	I1025 09:15:26.329820  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt with IP's: []
	I1025 09:15:26.735246  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt ...
	I1025 09:15:26.735276  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt: {Name:mk782fc69db18d88753465cefca07ee61999cf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.735488  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key ...
	I1025 09:15:26.735505  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key: {Name:mkf8bb93af2e3d11ccf0ab894717b994adb063f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.735728  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:15:26.735765  279928 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:15:26.735772  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:15:26.735795  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:15:26.735827  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:15:26.735849  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:15:26.735888  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:26.736421  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:15:26.755820  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:15:26.774147  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:15:26.792920  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:15:26.811477  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:15:26.830199  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:15:26.848867  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:15:26.868102  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:15:26.887910  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:15:26.909492  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:15:26.927490  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:15:26.944979  279928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:15:26.958195  279928 ssh_runner.go:195] Run: openssl version
	I1025 09:15:26.965063  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:15:26.974977  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:26.979031  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:26.979097  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:27.018091  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:15:27.028873  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:15:27.038925  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.043775  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.043852  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.081032  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:15:27.090311  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:15:27.099410  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.103356  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.103409  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.140571  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:15:27.149701  279928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:15:27.153665  279928 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:15:27.153732  279928 kubeadm.go:400] StartCluster: {Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:27.153809  279928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:15:27.153884  279928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:15:27.183161  279928 cri.go:89] found id: ""
	I1025 09:15:27.183234  279928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:15:27.191544  279928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:15:27.200214  279928 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:15:27.200290  279928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:15:27.208454  279928 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:15:27.208475  279928 kubeadm.go:157] found existing configuration files:
	
	I1025 09:15:27.208526  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:15:27.217396  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:15:27.217456  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:15:27.225670  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:15:27.236151  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:15:27.236214  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:15:27.245161  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:15:27.254460  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:15:27.254531  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:15:27.264877  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:15:27.274289  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:15:27.274375  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:15:27.284912  279928 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:15:27.328789  279928 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:15:27.328867  279928 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:15:27.351178  279928 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:15:27.351294  279928 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:15:27.351391  279928 kubeadm.go:318] OS: Linux
	I1025 09:15:27.351484  279928 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:15:27.351562  279928 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:15:27.351632  279928 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:15:27.351718  279928 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:15:27.351793  279928 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:15:27.351868  279928 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:15:27.351932  279928 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:15:27.351988  279928 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:15:27.422485  279928 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:15:27.422668  279928 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:15:27.422808  279928 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:15:27.430489  279928 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 09:15:24.889176  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:26.889579  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:28.388946  268581 pod_ready.go:94] pod "coredns-66bc5c9577-72zpn" is "Ready"
	I1025 09:15:28.388977  268581 pod_ready.go:86] duration metric: took 37.505736505s for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.392090  268581 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.397093  268581 pod_ready.go:94] pod "etcd-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.397132  268581 pod_ready.go:86] duration metric: took 5.011857ms for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.399595  268581 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.403894  268581 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.403922  268581 pod_ready.go:86] duration metric: took 4.302014ms for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.406153  268581 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.587570  268581 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.587597  268581 pod_ready.go:86] duration metric: took 181.422256ms for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.787005  268581 pod_ready.go:83] waiting for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.187351  268581 pod_ready.go:94] pod "kube-proxy-rmqbr" is "Ready"
	I1025 09:15:29.187384  268581 pod_ready.go:86] duration metric: took 400.350279ms for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.387388  268581 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.787121  268581 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:29.787150  268581 pod_ready.go:86] duration metric: took 399.732519ms for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.787164  268581 pod_ready.go:40] duration metric: took 38.908438746s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:15:29.833272  268581 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:15:29.837751  268581 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-891466" cluster and "default" namespace by default
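
The readiness polling above (pod_ready.go) is minikube-internal, but roughly the same wait can be expressed with stock kubectl, assuming the context name matches the profile as the "Done!" line indicates:

    # Equivalent wait for the kube-system pods to become Ready.
    kubectl --context default-k8s-diff-port-891466 -n kube-system \
      wait pod --all --for=condition=Ready --timeout=120s
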
	I1025 09:15:26.172422  279556 out.go:252]   - Generating certificates and keys ...
	I1025 09:15:26.172535  279556 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:15:26.172634  279556 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:15:26.285628  279556 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:15:26.713013  279556 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:15:27.071494  279556 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:15:27.179216  279556 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:15:27.221118  279556 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:15:27.221288  279556 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-687131 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:15:27.928204  279556 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:15:27.928373  279556 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-687131 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:15:28.068848  279556 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:15:28.204926  279556 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:15:28.440284  279556 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:15:28.440376  279556 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:15:28.579490  279556 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:15:28.909219  279556 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:15:29.245788  279556 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:15:29.318242  279556 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:15:29.914745  279556 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:15:29.915521  279556 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:15:29.920405  279556 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:15:29.923766  279556 out.go:252]   - Booting up control plane ...
	I1025 09:15:29.923896  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:15:29.924007  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:15:29.924130  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:15:29.938834  279556 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:15:29.938992  279556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:15:29.947531  279556 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:15:29.947860  279556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:15:29.947903  279556 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:15:30.066710  279556 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:15:30.066882  279556 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
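	The [kubelet-check] line above names the endpoint being polled; it can be probed directly on the node, for example from a shell opened with "minikube ssh -p auto-687131":
	
	  curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy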
	I1025 09:15:27.433789  279928 out.go:252]   - Generating certificates and keys ...
	I1025 09:15:27.433905  279928 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:15:27.434019  279928 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:15:27.635226  279928 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:15:28.010533  279928 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:15:28.223358  279928 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:15:28.339793  279928 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:15:28.504635  279928 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:15:28.504813  279928 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-687131 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:15:28.673200  279928 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:15:28.673381  279928 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-687131 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:15:28.779444  279928 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:15:28.943425  279928 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:15:29.037026  279928 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:15:29.037226  279928 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:15:29.100058  279928 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:15:29.360945  279928 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:15:29.761516  279928 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:15:30.697334  279928 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:15:30.927462  279928 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:15:30.928032  279928 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:15:30.933234  279928 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:15:30.936503  279928 out.go:252]   - Booting up control plane ...
	I1025 09:15:30.936633  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:15:30.936762  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:15:30.936842  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:15:30.949721  279928 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:15:30.949850  279928 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:15:30.956514  279928 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:15:30.956751  279928 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:15:30.956797  279928 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:15:31.067780  279556 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001155245s
	I1025 09:15:31.071443  279556 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:15:31.071574  279556 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 09:15:31.071722  279556 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:15:31.071865  279556 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:15:32.114667  279556 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.043109229s
	I1025 09:15:33.596604  279556 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.525171723s
	I1025 09:15:35.073741  279556 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00221941s
	I1025 09:15:35.087030  279556 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:15:35.099234  279556 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:15:35.109692  279556 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:15:35.109931  279556 kubeadm.go:318] [mark-control-plane] Marking the node auto-687131 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:15:35.119543  279556 kubeadm.go:318] [bootstrap-token] Using token: ds09vj.7po14nmutnpjjt8b
	I1025 09:15:35.121198  279556 out.go:252]   - Configuring RBAC rules ...
	I1025 09:15:35.121342  279556 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:15:35.126177  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:15:35.134866  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:15:35.137736  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:15:35.140350  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:15:35.144165  279556 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:15:35.479861  279556 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:15:31.054706  279928 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:15:31.054857  279928 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:15:32.055728  279928 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001030932s
	I1025 09:15:32.059944  279928 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:15:32.060054  279928 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1025 09:15:32.060171  279928 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:15:32.060273  279928 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:15:33.205829  279928 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.145837798s
	I1025 09:15:33.879959  279928 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.819970792s
	I1025 09:15:35.561861  279928 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501854256s
	I1025 09:15:35.574015  279928 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:15:35.585708  279928 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:15:35.595437  279928 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:15:35.595789  279928 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-687131 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:15:35.605853  279928 kubeadm.go:318] [bootstrap-token] Using token: a4kf7c.mn4eyqkotrnz0x3q
	I1025 09:15:35.607340  279928 out.go:252]   - Configuring RBAC rules ...
	I1025 09:15:35.607488  279928 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:15:35.611019  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:15:35.617043  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:15:35.619701  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:15:35.623283  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:15:35.625946  279928 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:15:35.967831  279928 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:15:35.901156  279556 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:15:36.480801  279556 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:15:36.481768  279556 kubeadm.go:318] 
	I1025 09:15:36.481872  279556 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:15:36.481883  279556 kubeadm.go:318] 
	I1025 09:15:36.481998  279556 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:15:36.482009  279556 kubeadm.go:318] 
	I1025 09:15:36.482046  279556 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:15:36.482134  279556 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:15:36.482232  279556 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:15:36.482255  279556 kubeadm.go:318] 
	I1025 09:15:36.482334  279556 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:15:36.482344  279556 kubeadm.go:318] 
	I1025 09:15:36.482421  279556 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:15:36.482432  279556 kubeadm.go:318] 
	I1025 09:15:36.482511  279556 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:15:36.482606  279556 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:15:36.482743  279556 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:15:36.482756  279556 kubeadm.go:318] 
	I1025 09:15:36.482883  279556 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:15:36.482995  279556 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:15:36.483005  279556 kubeadm.go:318] 
	I1025 09:15:36.483113  279556 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ds09vj.7po14nmutnpjjt8b \
	I1025 09:15:36.483287  279556 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:15:36.483321  279556 kubeadm.go:318] 	--control-plane 
	I1025 09:15:36.483329  279556 kubeadm.go:318] 
	I1025 09:15:36.483475  279556 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:15:36.483490  279556 kubeadm.go:318] 
	I1025 09:15:36.483608  279556 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ds09vj.7po14nmutnpjjt8b \
	I1025 09:15:36.483813  279556 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:15:36.486803  279556 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:15:36.486932  279556 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:15:36.486981  279556 cni.go:84] Creating CNI manager for ""
	I1025 09:15:36.486999  279556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:36.488810  279556 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:15:36.386755  279928 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:15:36.968068  279928 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:15:36.969126  279928 kubeadm.go:318] 
	I1025 09:15:36.969223  279928 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:15:36.969233  279928 kubeadm.go:318] 
	I1025 09:15:36.969328  279928 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:15:36.969337  279928 kubeadm.go:318] 
	I1025 09:15:36.969387  279928 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:15:36.969446  279928 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:15:36.969488  279928 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:15:36.969504  279928 kubeadm.go:318] 
	I1025 09:15:36.969598  279928 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:15:36.969608  279928 kubeadm.go:318] 
	I1025 09:15:36.969716  279928 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:15:36.969725  279928 kubeadm.go:318] 
	I1025 09:15:36.969769  279928 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:15:36.969873  279928 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:15:36.969975  279928 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:15:36.969984  279928 kubeadm.go:318] 
	I1025 09:15:36.970083  279928 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:15:36.970215  279928 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:15:36.970235  279928 kubeadm.go:318] 
	I1025 09:15:36.970345  279928 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token a4kf7c.mn4eyqkotrnz0x3q \
	I1025 09:15:36.970489  279928 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:15:36.970537  279928 kubeadm.go:318] 	--control-plane 
	I1025 09:15:36.970556  279928 kubeadm.go:318] 
	I1025 09:15:36.970702  279928 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:15:36.970713  279928 kubeadm.go:318] 
	I1025 09:15:36.970813  279928 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token a4kf7c.mn4eyqkotrnz0x3q \
	I1025 09:15:36.970967  279928 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:15:36.973483  279928 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:15:36.973617  279928 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:15:36.973668  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:36.975438  279928 out.go:179] * Configuring CNI (Container Networking Interface) ...
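	Both kubeadm runs finish here and hand off to CNI setup, with minikube recommending kindnet for the docker driver + crio runtime combination (see the cni.go lines above). A quick post-setup sanity check, assuming the kindnet daemonset carries its usual app=kindnet label:
	
	  kubectl --context auto-687131 get nodes -o wide
	  kubectl --context kindnet-687131 -n kube-system get pods -l app=kindnet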
	
	
	==> CRI-O <==
	Oct 25 09:15:05 embed-certs-106968 crio[562]: time="2025-10-25T09:15:05.225957347Z" level=info msg="Started container" PID=1739 containerID=eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper id=e2ceec34-650c-4730-a700-299a33fa785d name=/runtime.v1.RuntimeService/StartContainer sandboxID=97a81c4bc75b9153cc1f1f33db156a79a2f2c20aeea69cb4bc89abc77f69d0ad
	Oct 25 09:15:05 embed-certs-106968 crio[562]: time="2025-10-25T09:15:05.342012125Z" level=info msg="Removing container: aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b" id=f6c56f28-cbf4-4ff6-b93d-28ddc8223f2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:05 embed-certs-106968 crio[562]: time="2025-10-25T09:15:05.35361242Z" level=info msg="Removed container aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=f6c56f28-cbf4-4ff6-b93d-28ddc8223f2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.38275117Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f70af73-e3af-4dfc-a388-adc8efdfb54d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.383956892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9c49a2f9-3229-43dd-8699-04d6b16d9b2b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.385628697Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=54ea6799-a21b-4110-9fec-feb8a15ee4f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.385963826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391165846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391384143Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/03ce3abd5bf1346b7def3ff04c725957d5f3356ac21491d0bb40519736dc65bd/merged/etc/passwd: no such file or directory"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391544377Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/03ce3abd5bf1346b7def3ff04c725957d5f3356ac21491d0bb40519736dc65bd/merged/etc/group: no such file or directory"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391971885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.422018466Z" level=info msg="Created container 3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f: kube-system/storage-provisioner/storage-provisioner" id=54ea6799-a21b-4110-9fec-feb8a15ee4f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.422805261Z" level=info msg="Starting container: 3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f" id=741c03ee-2e0b-4d11-a7fe-668f3af68c2b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.425158986Z" level=info msg="Started container" PID=1753 containerID=3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f description=kube-system/storage-provisioner/storage-provisioner id=741c03ee-2e0b-4d11-a7fe-668f3af68c2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf7ee0639585a932c033b8fa6851607e075486e86ea44fc0b3df8f57a2af47a6
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.229155243Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7e39ef24-9310-4621-82d6-aeab79099573 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.230259709Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67b06e2b-2cee-49f5-975e-481cb2089f40 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.231335922Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=cdf2b6dc-6e2d-4b83-a92f-e7724888343f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.231478731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.238028855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.238670117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.272350201Z" level=info msg="Created container 7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=cdf2b6dc-6e2d-4b83-a92f-e7724888343f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.273173655Z" level=info msg="Starting container: 7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97" id=e74ccc8c-cfed-4280-929a-3bb0ad194bf9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.275560987Z" level=info msg="Started container" PID=1787 containerID=7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper id=e74ccc8c-cfed-4280-929a-3bb0ad194bf9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97a81c4bc75b9153cc1f1f33db156a79a2f2c20aeea69cb4bc89abc77f69d0ad
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.409221007Z" level=info msg="Removing container: eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b" id=9b34fa94-e914-4b3b-8c93-3a0e0f0925a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.420334886Z" level=info msg="Removed container eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=9b34fa94-e914-4b3b-8c93-3a0e0f0925a2 name=/runtime.v1.RuntimeService/RemoveContainer
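	The Created/Started/Removed cycle for dashboard-metrics-scraper above is a container restart loop. It can be inspected on the node with crictl; <container-id> below is a placeholder:
	
	  sudo crictl ps -a --name dashboard-metrics-scraper
	  sudo crictl logs <container-id>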
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7ed2d31508da6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   97a81c4bc75b9       dashboard-metrics-scraper-6ffb444bf9-h7z7c   kubernetes-dashboard
	3fe0a355171dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   bf7ee0639585a       storage-provisioner                          kube-system
	a5f2279abdd3d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   5fb798926aa0e       kubernetes-dashboard-855c9754f9-bffzw        kubernetes-dashboard
	0553f0bb1ffb9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   13ba78e35c7fe       coredns-66bc5c9577-dx4j4                     kube-system
	b9eea2497cea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   bf7ee0639585a       storage-provisioner                          kube-system
	7a79aee2c4047       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   069799b8e4f9a       kindnet-cf69x                                kube-system
	771f6d67f00e1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   742f47fb36c62       busybox                                      default
	c7f9b2e31210a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   3fbbb616861e9       kube-proxy-sm8hw                             kube-system
	c648a3db147ad       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   7e2e9a60890f6       kube-apiserver-embed-certs-106968            kube-system
	2ef3d40943865       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   84fd10fbabe9d       kube-scheduler-embed-certs-106968            kube-system
	8c0ca7560cc31       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   ad8eed87c64a6       kube-controller-manager-embed-certs-106968   kube-system
	5f6ebdb3d286f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   007119faf23cb       etcd-embed-certs-106968                      kube-system
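	Consistent with the CRI-O log, the scraper row shows state Exited at attempt 3 while everything else runs. The output of the previous (crashed) instance is retrievable with:
	
	  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-h7z7c --previous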
	
	
	==> coredns [0553f0bb1ffb9292e667528ee940875c401cef5ffdc7d9d0b2a6254ea2f48bb4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59000 - 9984 "HINFO IN 4838945748492174529.2678795752666801554. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.059621298s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
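	The dial timeouts to 10.96.0.1:443 indicate CoreDNS could not yet reach the API server through the service VIP, which fits a pod network that was still converging. In-cluster DNS can be spot-checked with a throwaway pod; the busybox image here is an illustrative choice:
	
	  kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 \
	    -- nslookup kubernetes.default.svc.cluster.local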
	
	
	==> describe nodes <==
	Name:               embed-certs-106968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-106968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=embed-certs-106968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-106968
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:15:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:14:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-106968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a272e628-6722-4504-b4e0-39037ebf73c9
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-dx4j4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m12s
	  kube-system                 etcd-embed-certs-106968                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m18s
	  kube-system                 kindnet-cf69x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m12s
	  kube-system                 kube-apiserver-embed-certs-106968             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-embed-certs-106968    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-sm8hw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-embed-certs-106968             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h7z7c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bffzw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m11s              kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m18s              kubelet          Node embed-certs-106968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s              kubelet          Node embed-certs-106968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s              kubelet          Node embed-certs-106968 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m18s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m14s              node-controller  Node embed-certs-106968 event: Registered Node embed-certs-106968 in Controller
	  Normal  NodeReady                91s                kubelet          Node embed-certs-106968 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-106968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-106968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-106968 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-106968 event: Registered Node embed-certs-106968 in Controller
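	This block is ordinary node introspection output and can be regenerated at any point with:
	
	  kubectl describe node embed-certs-106968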
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
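	The repeated "martian source" entries are the kernel flagging packets whose source address (127.0.0.1) is implausible for the interface they arrived on (eth0); they appear because martian logging is enabled. The relevant sysctls can be checked with:
	
	  sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians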
	
	
	==> etcd [5f6ebdb3d286f37cd6ede568d0ef9b8b18e5bcd2de579823ff85eae51b26b151] <==
	{"level":"warn","ts":"2025-10-25T09:14:46.506991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.520878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.527456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.533990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.540890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.547001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.553178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.560110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.567053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.573446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.580468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.587617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.594403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.607375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.614209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.621144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.682476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:18.265974Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.169595ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:15:18.266115Z","caller":"traceutil/trace.go:172","msg":"trace[1986785789] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:621; }","duration":"117.337435ms","start":"2025-10-25T09:15:18.148755Z","end":"2025-10-25T09:15:18.266093Z","steps":["trace[1986785789] 'range keys from in-memory index tree'  (duration: 117.118505ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:15:18.266549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.833807ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765720510285700 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-g5sidkm4nzelivkullms6t66ti\" mod_revision:615 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-g5sidkm4nzelivkullms6t66ti\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-g5sidkm4nzelivkullms6t66ti\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T09:15:18.266632Z","caller":"traceutil/trace.go:172","msg":"trace[2116175180] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"103.244941ms","start":"2025-10-25T09:15:18.163375Z","end":"2025-10-25T09:15:18.266620Z","steps":["trace[2116175180] 'read index received'  (duration: 40.781µs)","trace[2116175180] 'applied index is now lower than readState.Index'  (duration: 103.203415ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:15:18.266788Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.413297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-dx4j4\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-10-25T09:15:18.266776Z","caller":"traceutil/trace.go:172","msg":"trace[333997458] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"205.54282ms","start":"2025-10-25T09:15:18.061216Z","end":"2025-10-25T09:15:18.266759Z","steps":["trace[333997458] 'process raft request'  (duration: 30.910019ms)","trace[333997458] 'compare'  (duration: 173.679139ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:15:18.266849Z","caller":"traceutil/trace.go:172","msg":"trace[230986260] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-dx4j4; range_end:; response_count:1; response_revision:622; }","duration":"103.479548ms","start":"2025-10-25T09:15:18.163361Z","end":"2025-10-25T09:15:18.266841Z","steps":["trace[230986260] 'agreement among raft nodes before linearized reading'  (duration: 103.316847ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:15:20.521515Z","caller":"traceutil/trace.go:172","msg":"trace[457088610] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"126.179071ms","start":"2025-10-25T09:15:20.395315Z","end":"2025-10-25T09:15:20.521494Z","steps":["trace[457088610] 'process raft request'  (duration: 126.025258ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:15:39 up 58 min,  0 user,  load average: 4.81, 3.47, 2.35
	Linux embed-certs-106968 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7a79aee2c4047ff17a490493c6fabf5d9bf45c412c892472070caeb72cab191d] <==
	I1025 09:14:48.779896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:14:48.780204       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:14:48.780371       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:14:48.780389       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:14:48.780416       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:14:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:14:48.982606       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:14:48.982634       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:14:48.982673       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:14:48.982786       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:14:49.575075       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:14:49.575115       1 metrics.go:72] Registering metrics
	I1025 09:14:49.575219       1 controller.go:711] "Syncing nftables rules"
	I1025 09:14:58.982762       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:14:58.982855       1 main.go:301] handling current node
	I1025 09:15:08.984746       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:08.984797       1 main.go:301] handling current node
	I1025 09:15:18.982393       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:18.982451       1 main.go:301] handling current node
	I1025 09:15:28.983136       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:28.983194       1 main.go:301] handling current node
	I1025 09:15:38.985733       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:38.985765       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c648a3db147adba437828b8bb877ee3ed46dad5ba403d4d1114c0bb1060d15d1] <==
	I1025 09:14:47.401429       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:14:47.391093       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:14:47.391026       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1025 09:14:47.414675       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:14:47.419221       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:14:47.430013       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:14:47.430260       1 policy_source.go:240] refreshing policies
	I1025 09:14:47.447069       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:14:47.476110       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:14:47.485501       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:14:47.487883       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:14:47.488029       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:14:47.487984       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:14:47.498268       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:14:47.859740       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:14:47.893771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:14:47.917663       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:14:47.929008       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:14:47.936544       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:14:47.974013       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.241.171"}
	I1025 09:14:47.987901       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.136.50"}
	I1025 09:14:48.291818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:14:50.804158       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:14:51.203652       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:14:51.253993       1 controller.go:667] quota admission added evaluator for: replicasets.apps
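	The "quota admission added evaluator" lines are routine startup noise as the quota plugin registers newly observed resource types. Aggregate API server health, with per-check detail, is available via:
	
	  kubectl get --raw '/livez?verbose'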
	
	
	==> kube-controller-manager [8c0ca7560cc31a31d55fa3e6598cfaffb772455fa1a71284e0cc016b5d7ca083] <==
	I1025 09:14:50.750666       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:14:50.750759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:14:50.750767       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:14:50.750773       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:14:50.750781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:14:50.753016       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:14:50.754014       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:14:50.754023       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:14:50.755182       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:14:50.757505       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:14:50.758977       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:14:50.759073       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:14:50.760264       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:14:50.762515       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:14:50.762611       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:14:50.762703       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-106968"
	I1025 09:14:50.762767       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:14:50.764857       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:14:50.767102       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:14:50.768274       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:14:50.769416       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:14:50.771701       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:14:50.772897       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:14:50.777169       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:14:50.784468       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c7f9b2e31210a0e8cec194cd09bb4249f8bdfccefdcdfc0247b7045f2826a78c] <==
	I1025 09:14:48.647962       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:14:48.712859       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:14:48.814792       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:14:48.814842       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 09:14:48.814945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:14:48.835486       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:14:48.835544       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:14:48.840958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:14:48.841347       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:14:48.841369       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:48.842789       1 config.go:200] "Starting service config controller"
	I1025 09:14:48.842823       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:14:48.842868       1 config.go:309] "Starting node config controller"
	I1025 09:14:48.842879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:14:48.842988       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:14:48.843005       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:14:48.843036       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:14:48.843045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:14:48.943423       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:14:48.943447       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:14:48.943492       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:14:48.943549       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
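The kube-proxy startup itself is clean; the only E-level entry is the advisory about nodePortAddresses being unset, which means NodePort connections are accepted on all local IPs. As a minimal sketch of the log's own suggestion (assuming the standard kubeadm kube-proxy ConfigMap and DaemonSet, which minikube provisions):

    $ kubectl -n kube-system edit configmap kube-proxy
    # in the KubeProxyConfiguration document, restrict NodePorts to the primary node IP:
    #   nodePortAddresses: ["primary"]
    $ kubectl -n kube-system rollout restart daemonset/kube-proxy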
	
	==> kube-scheduler [2ef3d4094386517bb13e629728d51979ce32350e4cc4fdc820576cb2101fd8b5] <==
	I1025 09:14:45.603158       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:14:47.358269       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:14:47.358306       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:14:47.358334       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:14:47.358345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:14:47.390880       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:14:47.390914       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:47.400081       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:14:47.400244       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:47.400260       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:47.400282       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:14:47.500971       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
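The scheduler warnings are the usual startup race on a restarted control plane: it cannot yet read configmap/extension-apiserver-authentication, so it proceeds without that authentication configuration. The log spells out the remedy itself; keeping its placeholders (ROLEBINDING_NAME, YOUR_NS, YOUR_SA) as given:

    $ kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
        --role=extension-apiserver-authentication-reader \
        --serviceaccount=YOUR_NS:YOUR_SA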
	
	==> kubelet <==
	Oct 25 09:14:55 embed-certs-106968 kubelet[717]: I1025 09:14:55.293192     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:14:55 embed-certs-106968 kubelet[717]: E1025 09:14:55.293400     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:14:55 embed-certs-106968 kubelet[717]: I1025 09:14:55.293775     717 scope.go:117] "RemoveContainer" containerID="018de8aa7e9d4f0baf21f752e1e259f5298689ed1a4e60f4cc8e058d651de890"
	Oct 25 09:14:56 embed-certs-106968 kubelet[717]: I1025 09:14:56.299330     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:14:56 embed-certs-106968 kubelet[717]: E1025 09:14:56.299534     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:14:57 embed-certs-106968 kubelet[717]: I1025 09:14:57.302016     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:14:57 embed-certs-106968 kubelet[717]: E1025 09:14:57.302191     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:14:58 embed-certs-106968 kubelet[717]: I1025 09:14:58.325204     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bffzw" podStartSLOduration=0.972322642 podStartE2EDuration="7.325175402s" podCreationTimestamp="2025-10-25 09:14:51 +0000 UTC" firstStartedPulling="2025-10-25 09:14:51.701816148 +0000 UTC m=+7.583621174" lastFinishedPulling="2025-10-25 09:14:58.054668903 +0000 UTC m=+13.936473934" observedRunningTime="2025-10-25 09:14:58.324612875 +0000 UTC m=+14.206417919" watchObservedRunningTime="2025-10-25 09:14:58.325175402 +0000 UTC m=+14.206980445"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: I1025 09:15:05.170109     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: I1025 09:15:05.339872     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: I1025 09:15:05.340155     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: E1025 09:15:05.340412     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:15 embed-certs-106968 kubelet[717]: I1025 09:15:15.171018     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:15 embed-certs-106968 kubelet[717]: E1025 09:15:15.171268     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:19 embed-certs-106968 kubelet[717]: I1025 09:15:19.382247     717 scope.go:117] "RemoveContainer" containerID="b9eea2497cea5220336461976fd7a8b5dc1b5ffee643fdef046f11ca9427edd6"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: I1025 09:15:27.228580     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: I1025 09:15:27.407913     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: I1025 09:15:27.408213     717 scope.go:117] "RemoveContainer" containerID="7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: E1025 09:15:27.408455     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:35 embed-certs-106968 kubelet[717]: I1025 09:15:35.170706     717 scope.go:117] "RemoveContainer" containerID="7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97"
	Oct 25 09:15:35 embed-certs-106968 kubelet[717]: E1025 09:15:35.170935     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: kubelet.service: Consumed 1.824s CPU time.
	
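The kubelet log shows dashboard-metrics-scraper in a normal CrashLoopBackOff progression, with the restart back-off doubling from 10s to 20s to 40s before the node is stopped at 09:15:36. Illustrative triage steps (not part of the test harness) would be:

    $ kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-h7z7c
    $ kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-h7z7c --previous   # logs from the crashed container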
	
	==> kubernetes-dashboard [a5f2279abdd3d8573970804fa06c858ff73b788144c0c791ed73128c4381f6d0] <==
	2025/10/25 09:14:58 Starting overwatch
	2025/10/25 09:14:58 Using namespace: kubernetes-dashboard
	2025/10/25 09:14:58 Using in-cluster config to connect to apiserver
	2025/10/25 09:14:58 Using secret token for csrf signing
	2025/10/25 09:14:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:14:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:14:58 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:14:58 Generating JWE encryption key
	2025/10/25 09:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:14:58 Initializing JWE encryption key from synchronized object
	2025/10/25 09:14:58 Creating in-cluster Sidecar client
	2025/10/25 09:14:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:14:58 Serving insecurely on HTTP port: 9090
	2025/10/25 09:15:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
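The dashboard itself is healthy and serving on HTTP port 9090; only its Sidecar metric client fails, because the dashboard-metrics-scraper Service has no ready backend while that pod crash-loops (see the kubelet log above). A quick way to confirm, for illustration:

    $ kubectl -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper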
	
	==> storage-provisioner [3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f] <==
	I1025 09:15:19.440603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:15:19.450335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:15:19.450385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:15:19.453435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:22.910039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:27.170885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:30.769196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:33.822441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.845215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.851961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:36.852190       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:15:36.852259       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e170d88-5532-46a5-99b3-fc8a977a4e4b", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-106968_56da4eb5-92f6-4f7a-a4f4-75ada9c31b6b became leader
	I1025 09:15:36.852453       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-106968_56da4eb5-92f6-4f7a-a4f4-75ada9c31b6b!
	W1025 09:15:36.855879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.861116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:36.953535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-106968_56da4eb5-92f6-4f7a-a4f4-75ada9c31b6b!
	W1025 09:15:38.866138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:38.871058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
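This replacement provisioner starts at 09:15:19 and acquires the k8s.io-minikube-hostpath lock at 09:15:36, once the previous holder's lease expires. The repeated deprecation warnings are expected: per the event line above, its leader election still goes through a legacy v1 Endpoints lock, and the lock object named in the log can be inspected directly (illustrative):

    $ kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml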
	
	==> storage-provisioner [b9eea2497cea5220336461976fd7a8b5dc1b5ffee643fdef046f11ca9427edd6] <==
	I1025 09:14:48.609001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:15:18.615091       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
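The earlier instance explains the restart the kubelet records at 09:15:19: it spent 30 seconds unable to reach the in-cluster apiserver VIP and then exited fatally. A generic in-cluster connectivity probe, sketched here with an arbitrary image choice (curlimages/curl is not something this report uses):

    $ kubectl run apicheck --rm -it --restart=Never --image=curlimages/curl -- \
        curl -sk https://10.96.0.1:443/version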

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106968 -n embed-certs-106968
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106968 -n embed-certs-106968: exit status 2 (358.290727ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-106968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-106968
helpers_test.go:243: (dbg) docker inspect embed-certs-106968:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2",
	        "Created": "2025-10-25T09:13:06.160714175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:14:37.87793428Z",
	            "FinishedAt": "2025-10-25T09:14:36.98629726Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/hosts",
	        "LogPath": "/var/lib/docker/containers/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2/e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2-json.log",
	        "Name": "/embed-certs-106968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-106968:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-106968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1514b5823306c12d3a6979f463b5d556fab676c1d18a766a5ad5f1e46bdacf2",
	                "LowerDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c80001a17db450c0243fbfbebb80f6347ada23fd185cf5989c29e7838242688/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-106968",
	                "Source": "/var/lib/docker/volumes/embed-certs-106968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-106968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-106968",
	                "name.minikube.sigs.k8s.io": "embed-certs-106968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f2ec0ea2b867c30f6aa7e065db973cdf21aa8dfd947fb2e8acd3048b579e70d",
	            "SandboxKey": "/var/run/docker/netns/5f2ec0ea2b86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-106968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:76:e7:82:26:b7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d58a21465e1439a449774f24fb5c5d02c9ed0fbccfcab14073246dc3e313836",
	                    "EndpointID": "d05c169e307afc88d3f141bb015400e4762e8dd3c87e817e0632e7007fdc528a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-106968",
	                        "e1514b582330"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
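The harness reads its connection details from this inspect dump; the same fields can be pulled straight out with a Go template. For example, the host port mapped to the apiserver's 8443/tcp (33093 in the output above):

    $ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-106968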
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968: exit status 2 (376.547855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-106968 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-106968 logs -n 25: (1.318237794s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-016092                                                                                                                                                                                                                          │ no-preload-016092            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-891466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-891466 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ stop    │ -p embed-certs-106968 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p newest-cni-036155 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-106968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-891466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-036155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ image   │ newest-cni-036155 image list --format=json                                                                                                                                                                                                    │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p newest-cni-036155 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p auto-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-687131                  │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p kindnet-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-687131               │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ image   │ embed-certs-106968 image list --format=json                                                                                                                                                                                                   │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p embed-certs-106968 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:16.020787  279928 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:16.021157  279928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:16.021171  279928 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:16.021178  279928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:16.021473  279928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:16.022216  279928 out.go:368] Setting JSON to false
	I1025 09:15:16.023688  279928 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3464,"bootTime":1761380252,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:16.023798  279928 start.go:141] virtualization: kvm guest
	I1025 09:15:16.026632  279928 out.go:179] * [kindnet-687131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:16.028561  279928 notify.go:220] Checking for updates...
	I1025 09:15:16.028593  279928 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:15:16.030119  279928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:16.031829  279928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:16.033381  279928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:15:16.034874  279928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:15:16.036503  279928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:15:16.038554  279928 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038660  279928 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038733  279928 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038820  279928 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:16.066342  279928 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:16.066508  279928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:16.134706  279928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:68 SystemTime:2025-10-25 09:15:16.122944363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:16.134814  279928 docker.go:318] overlay module found
	I1025 09:15:16.137572  279928 out.go:179] * Using the docker driver based on user configuration
	I1025 09:15:16.140435  279928 start.go:305] selected driver: docker
	I1025 09:15:16.140457  279928 start.go:925] validating driver "docker" against <nil>
	I1025 09:15:16.140470  279928 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:15:16.141086  279928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:16.207410  279928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 09:15:16.195269689 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:16.207685  279928 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:15:16.207951  279928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:15:16.210244  279928 out.go:179] * Using Docker driver with root privileges
	I1025 09:15:16.211682  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:16.211710  279928 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:15:16.211813  279928 start.go:349] cluster config:
	{Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:16.213496  279928 out.go:179] * Starting "kindnet-687131" primary control-plane node in "kindnet-687131" cluster
	I1025 09:15:16.214878  279928 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:15:16.216267  279928 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:16.217483  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:16.217519  279928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:16.217533  279928 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:15:16.217544  279928 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:16.217693  279928 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:15:16.217707  279928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:15:16.217850  279928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json ...
	I1025 09:15:16.217881  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json: {Name:mk59edad4f0461fbcf9ec630103ca3869ab6269c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:16.242933  279928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:15:16.242960  279928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:15:16.242982  279928 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:15:16.243012  279928 start.go:360] acquireMachinesLock for kindnet-687131: {Name:mk9e87ffb8b828e3d740e3d2456d3f613e75798f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:15:16.243126  279928 start.go:364] duration metric: took 91.55µs to acquireMachinesLock for "kindnet-687131"
	I1025 09:15:16.243170  279928 start.go:93] Provisioning new machine with config: &{Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:15:16.243276  279928 start.go:125] createHost starting for "" (driver="docker")
	W1025 09:15:14.166974  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:16.172374  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:15.890048  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:17.890391  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:15.786223  279556 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:15:15.786457  279556 start.go:159] libmachine.API.Create for "auto-687131" (driver="docker")
	I1025 09:15:15.786489  279556 client.go:168] LocalClient.Create starting
	I1025 09:15:15.786579  279556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:15:15.786623  279556 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:15.786675  279556 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:15.786756  279556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:15:15.786785  279556 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:15.786803  279556 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:15.787187  279556 cli_runner.go:164] Run: docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:15:15.806182  279556 cli_runner.go:211] docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:15:15.806242  279556 network_create.go:284] running [docker network inspect auto-687131] to gather additional debugging logs...
	I1025 09:15:15.806261  279556 cli_runner.go:164] Run: docker network inspect auto-687131
	W1025 09:15:15.827929  279556 cli_runner.go:211] docker network inspect auto-687131 returned with exit code 1
	I1025 09:15:15.827975  279556 network_create.go:287] error running [docker network inspect auto-687131]: docker network inspect auto-687131: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-687131 not found
	I1025 09:15:15.827997  279556 network_create.go:289] output of [docker network inspect auto-687131]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-687131 not found
	
	** /stderr **
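The failed inspects above are expected control flow rather than a fault: minikube probes for the profile network and creates it only when docker reports "not found". A minimal shell sketch of the same probe-then-create pattern (network name taken from the log; the explicit fallback branch illustrates the intent, it is not minikube's literal code):

    # Probe quietly; exit status 1 with "network ... not found" means we must create it.
    if ! docker network inspect auto-687131 >/dev/null 2>&1; then
      docker network create auto-687131
    fi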
	I1025 09:15:15.828184  279556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:15.847440  279556 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:15:15.848061  279556 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:15:15.848790  279556 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:15:15.849253  279556 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:15:15.850068  279556 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e752b0}
	I1025 09:15:15.850116  279556 network_create.go:124] attempt to create docker network auto-687131 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 09:15:15.850193  279556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-687131 auto-687131
	I1025 09:15:15.916274  279556 network_create.go:108] docker network auto-687131 192.168.85.0/24 created
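For reference, the subnet scan above walks candidate private /24s in steps of 9 in the third octet (…49, …58, …67, …76) until it finds one with no bridge interface bound to it, then issues the create. The create call can be reproduced by hand with the flags copied from the log line above:

    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=auto-687131 \
      auto-687131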
	I1025 09:15:15.916314  279556 kic.go:121] calculated static IP "192.168.85.2" for the "auto-687131" container
	I1025 09:15:15.916418  279556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:15:15.937311  279556 cli_runner.go:164] Run: docker volume create auto-687131 --label name.minikube.sigs.k8s.io=auto-687131 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:15:15.958005  279556 oci.go:103] Successfully created a docker volume auto-687131
	I1025 09:15:15.958109  279556 cli_runner.go:164] Run: docker run --rm --name auto-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-687131 --entrypoint /usr/bin/test -v auto-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:15:16.396685  279556 oci.go:107] Successfully prepared a docker volume auto-687131
	I1025 09:15:16.396740  279556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:16.396765  279556 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:15:16.396833  279556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:15:19.141617  279556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (2.744742156s)
	I1025 09:15:19.141672  279556 kic.go:203] duration metric: took 2.74490357s to extract preloaded images to volume ...
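The two docker run calls above implement the image preload: a throwaway sidecar container first validates the named volume, then a tar container unpacks the preloaded image tarball into it. The extraction step, reproducible by hand (the variables stand in for the long host path and pinned kicbase digest shown in the log):

    PRELOAD=/home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773   # digest pin omitted here for brevity
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v auto-687131:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir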
	W1025 09:15:19.141768  279556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:15:19.141825  279556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:15:19.141868  279556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:15:19.210146  279556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-687131 --name auto-687131 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-687131 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-687131 --network auto-687131 --ip 192.168.85.2 --volume auto-687131:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:15:19.547183  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Running}}
	I1025 09:15:19.568747  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.588991  279556 cli_runner.go:164] Run: docker exec auto-687131 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:15:19.639905  279556 oci.go:144] the created container "auto-687131" has a running status.
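Between docker run and first use, minikube polls the container state and exec-probes it, as the three commands above show; the equivalent manual health check:

    docker container inspect auto-687131 --format '{{.State.Running}}'   # expect: true
    docker container inspect auto-687131 --format '{{.State.Status}}'    # expect: running
    docker exec auto-687131 stat /var/lib/dpkg/alternatives/iptables     # base-image sanity check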
	I1025 09:15:19.639945  279556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa...
	I1025 09:15:19.760291  279556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:15:19.795261  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.821632  279556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:15:19.821699  279556 kic_runner.go:114] Args: [docker exec --privileged auto-687131 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:15:19.870801  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.898909  279556 machine.go:93] provisionDockerMachine start ...
	I1025 09:15:19.899009  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:19.922667  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:19.923027  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:19.923059  279556 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:15:20.067753  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-687131
	
	I1025 09:15:20.067781  279556 ubuntu.go:182] provisioning hostname "auto-687131"
	I1025 09:15:20.067841  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:20.086111  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:20.086338  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:20.086354  279556 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-687131 && echo "auto-687131" | sudo tee /etc/hostname
	I1025 09:15:20.271814  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-687131
	
	I1025 09:15:20.271897  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:20.292274  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:20.292587  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:20.292623  279556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-687131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-687131/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-687131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:15:20.442537  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:15:20.442571  279556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:15:20.442604  279556 ubuntu.go:190] setting up certificates
	I1025 09:15:20.442619  279556 provision.go:84] configureAuth start
	I1025 09:15:20.442691  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:20.460617  279556 provision.go:143] copyHostCerts
	I1025 09:15:20.460717  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:15:20.460730  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:15:20.510975  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:15:20.511209  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:15:20.511225  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:15:20.511278  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:15:20.511407  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:15:20.511419  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:15:20.511456  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:15:20.511555  279556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.auto-687131 san=[127.0.0.1 192.168.85.2 auto-687131 localhost minikube]
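The server cert generated here has to cover every name and address the machine is reachable by, hence the SAN list in the log (127.0.0.1, 192.168.85.2, auto-687131, localhost, minikube). A rough openssl equivalent, for illustration only (self-signed for brevity, whereas minikube signs against its ca.pem; assumes OpenSSL 1.1.1+ for -addext):

    openssl req -x509 -newkey rsa:2048 -nodes -days 1095 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.auto-687131" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:auto-687131,DNS:localhost,DNS:minikube"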
	I1025 09:15:16.245622  279928 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:15:16.245926  279928 start.go:159] libmachine.API.Create for "kindnet-687131" (driver="docker")
	I1025 09:15:16.245971  279928 client.go:168] LocalClient.Create starting
	I1025 09:15:16.246054  279928 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:15:16.246095  279928 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:16.246115  279928 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:16.246201  279928 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:15:16.246246  279928 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:16.246267  279928 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:16.246894  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:15:16.270502  279928 cli_runner.go:211] docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:15:16.270577  279928 network_create.go:284] running [docker network inspect kindnet-687131] to gather additional debugging logs...
	I1025 09:15:16.270592  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131
	W1025 09:15:16.290826  279928 cli_runner.go:211] docker network inspect kindnet-687131 returned with exit code 1
	I1025 09:15:16.290865  279928 network_create.go:287] error running [docker network inspect kindnet-687131]: docker network inspect kindnet-687131: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-687131 not found
	I1025 09:15:16.290881  279928 network_create.go:289] output of [docker network inspect kindnet-687131]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-687131 not found
	
	** /stderr **
	I1025 09:15:16.290987  279928 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:16.314287  279928 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:15:16.315250  279928 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:15:16.316258  279928 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:15:16.316988  279928 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:15:16.317865  279928 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-427f290f6b13 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:07:d0:a1:54:23} reservation:<nil>}
	I1025 09:15:16.318520  279928 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5d58a21465e1 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4e:78:a8:09:a3:02} reservation:<nil>}
	I1025 09:15:16.319390  279928 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe0500}
	I1025 09:15:16.319416  279928 network_create.go:124] attempt to create docker network kindnet-687131 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:15:16.319460  279928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-687131 kindnet-687131
	I1025 09:15:16.397907  279928 network_create.go:108] docker network kindnet-687131 192.168.103.0/24 created
	I1025 09:15:16.397939  279928 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-687131" container
	I1025 09:15:16.397993  279928 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:15:16.417914  279928 cli_runner.go:164] Run: docker volume create kindnet-687131 --label name.minikube.sigs.k8s.io=kindnet-687131 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:15:16.437974  279928 oci.go:103] Successfully created a docker volume kindnet-687131
	I1025 09:15:16.438054  279928 cli_runner.go:164] Run: docker run --rm --name kindnet-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --entrypoint /usr/bin/test -v kindnet-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:15:17.461263  279928 cli_runner.go:217] Completed: docker run --rm --name kindnet-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --entrypoint /usr/bin/test -v kindnet-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.023162971s)
	I1025 09:15:17.461305  279928 oci.go:107] Successfully prepared a docker volume kindnet-687131
	I1025 09:15:17.461333  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:17.461353  279928 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:15:17.461430  279928 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 09:15:18.301233  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:20.666718  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	I1025 09:15:22.166607  267761 pod_ready.go:94] pod "coredns-66bc5c9577-dx4j4" is "Ready"
	I1025 09:15:22.166687  267761 pod_ready.go:86] duration metric: took 33.505954367s for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.170010  267761 pod_ready.go:83] waiting for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.174911  267761 pod_ready.go:94] pod "etcd-embed-certs-106968" is "Ready"
	I1025 09:15:22.174944  267761 pod_ready.go:86] duration metric: took 4.899097ms for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.177358  267761 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.181786  267761 pod_ready.go:94] pod "kube-apiserver-embed-certs-106968" is "Ready"
	I1025 09:15:22.181822  267761 pod_ready.go:86] duration metric: took 4.436379ms for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.183829  267761 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.364905  267761 pod_ready.go:94] pod "kube-controller-manager-embed-certs-106968" is "Ready"
	I1025 09:15:22.364933  267761 pod_ready.go:86] duration metric: took 181.084937ms for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.565796  267761 pod_ready.go:83] waiting for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.964268  267761 pod_ready.go:94] pod "kube-proxy-sm8hw" is "Ready"
	I1025 09:15:22.964293  267761 pod_ready.go:86] duration metric: took 398.467936ms for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.164880  267761 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.565174  267761 pod_ready.go:94] pod "kube-scheduler-embed-certs-106968" is "Ready"
	I1025 09:15:23.565206  267761 pod_ready.go:86] duration metric: took 400.294371ms for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.565222  267761 pod_ready.go:40] duration metric: took 34.9096785s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:15:23.621826  267761 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:15:23.624241  267761 out.go:179] * Done! kubectl is now configured to use "embed-certs-106968" cluster and "default" namespace by default
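At this point the embed-certs-106968 start is complete and kubectl talks to it by default; a quick way to confirm what the readiness loop above was waiting on (assuming minikube's usual profile-named context):

    kubectl config use-context embed-certs-106968
    kubectl get pods -n kube-system   # coredns, etcd, apiserver, controller-manager, proxy, scheduler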
	I1025 09:15:21.341448  279556 provision.go:177] copyRemoteCerts
	I1025 09:15:21.341532  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:15:21.341608  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:21.362919  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:21.473321  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:15:21.654106  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 09:15:21.717581  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:15:21.741804  279556 provision.go:87] duration metric: took 1.299167498s to configureAuth
	I1025 09:15:21.741856  279556 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:15:21.742057  279556 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:21.742325  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:21.768335  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:21.769187  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:21.769223  279556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:15:22.255810  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:15:22.255850  279556 machine.go:96] duration metric: took 2.356919433s to provisionDockerMachine
	I1025 09:15:22.255864  279556 client.go:171] duration metric: took 6.469363636s to LocalClient.Create
	I1025 09:15:22.255894  279556 start.go:167] duration metric: took 6.469435334s to libmachine.API.Create "auto-687131"
	I1025 09:15:22.255910  279556 start.go:293] postStartSetup for "auto-687131" (driver="docker")
	I1025 09:15:22.255923  279556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:15:22.255996  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:15:22.256050  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.277614  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.387947  279556 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:15:22.395824  279556 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:15:22.395865  279556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:15:22.395879  279556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:15:22.395950  279556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:15:22.396136  279556 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:15:22.396541  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:15:22.407550  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:22.434048  279556 start.go:296] duration metric: took 178.121274ms for postStartSetup
	I1025 09:15:22.434977  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:22.457420  279556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/config.json ...
	I1025 09:15:22.457771  279556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:15:22.457824  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.480826  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.584880  279556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:15:22.590391  279556 start.go:128] duration metric: took 6.806327034s to createHost
	I1025 09:15:22.590431  279556 start.go:83] releasing machines lock for "auto-687131", held for 6.80645362s
	I1025 09:15:22.590493  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:22.610539  279556 ssh_runner.go:195] Run: cat /version.json
	I1025 09:15:22.610583  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.610603  279556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:15:22.610695  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.630329  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.630621  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.798632  279556 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:22.806370  279556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:15:22.847984  279556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:15:22.853905  279556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:15:22.853979  279556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:15:22.881992  279556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:15:22.882017  279556 start.go:495] detecting cgroup driver to use...
	I1025 09:15:22.882050  279556 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:15:22.882096  279556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:15:22.902000  279556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:15:22.917189  279556 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:15:22.917246  279556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:15:22.935738  279556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:15:22.960242  279556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:15:23.066373  279556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:15:23.203040  279556 docker.go:234] disabling docker service ...
	I1025 09:15:23.203110  279556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:15:23.225691  279556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:15:23.242722  279556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:15:23.338881  279556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:15:23.436201  279556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:15:23.449397  279556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:15:23.465144  279556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:15:23.465208  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.476785  279556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:15:23.476857  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.486376  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.496079  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.507141  279556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:15:23.516073  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.526594  279556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.544236  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.554362  279556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:15:23.563498  279556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:15:23.572509  279556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:23.669764  279556 ssh_runner.go:195] Run: sudo systemctl restart crio
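The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd as cgroup manager, conmon placed in the pod cgroup, and the unprivileged-port sysctl, followed by a daemon reload and restart. Condensed into one sketch (same edits and file, copied from the commands above):

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio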
	I1025 09:15:23.790270  279556 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:15:23.790374  279556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:15:23.794532  279556 start.go:563] Will wait 60s for crictl version
	I1025 09:15:23.794589  279556 ssh_runner.go:195] Run: which crictl
	I1025 09:15:23.798393  279556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:15:23.823069  279556 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:15:23.823148  279556 ssh_runner.go:195] Run: crio --version
	I1025 09:15:23.852060  279556 ssh_runner.go:195] Run: crio --version
	I1025 09:15:23.884239  279556 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 09:15:19.896862  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:22.390120  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:23.885891  279556 cli_runner.go:164] Run: docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:23.906293  279556 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:15:23.911133  279556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:23.925504  279556 kubeadm.go:883] updating cluster {Name:auto-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:15:23.925712  279556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:23.925784  279556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:23.966169  279556 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:23.966190  279556 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:15:23.966243  279556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:23.994585  279556 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:23.994604  279556 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:15:23.994611  279556 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:15:23.994737  279556 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-687131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:15:23.994831  279556 ssh_runner.go:195] Run: crio config
	I1025 09:15:24.046767  279556 cni.go:84] Creating CNI manager for ""
	I1025 09:15:24.046790  279556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:24.046811  279556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:15:24.046837  279556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-687131 NodeName:auto-687131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:15:24.046988  279556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-687131"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
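The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what lands on the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config like this can be sanity-checked without touching the node (a sketch; minikube does not log this step here):

    # Renders manifests and validates the merged config without applying anything.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run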
	I1025 09:15:24.047063  279556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:15:24.055111  279556 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:15:24.055172  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:15:24.063035  279556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 09:15:24.076837  279556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:15:24.094395  279556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1025 09:15:24.107726  279556 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:15:24.112067  279556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:24.122709  279556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:24.208028  279556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:24.236216  279556 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131 for IP: 192.168.85.2
	I1025 09:15:24.236238  279556 certs.go:195] generating shared ca certs ...
	I1025 09:15:24.236256  279556 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.236434  279556 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:15:24.236488  279556 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:15:24.236501  279556 certs.go:257] generating profile certs ...
	I1025 09:15:24.236564  279556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key
	I1025 09:15:24.236581  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt with IP's: []
	I1025 09:15:24.928992  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt ...
	I1025 09:15:24.929020  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt: {Name:mk779bd9fdf8eaa5918f81c459f798815b970211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.929218  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key ...
	I1025 09:15:24.929242  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key: {Name:mk46972b19f1fd85299d3aff68dfc355ea581ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.929386  279556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded
	I1025 09:15:24.929408  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 09:15:25.370687  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded ...
	I1025 09:15:25.370717  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded: {Name:mk758bb25e73fe6bee588c76326f09382b8c326f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.370874  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded ...
	I1025 09:15:25.370888  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded: {Name:mk7ceb126fbb04a31aaba790cb04f339aa54e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.370958  279556 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt
	I1025 09:15:25.371030  279556 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key
	I1025 09:15:25.371087  279556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key
	I1025 09:15:25.371102  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt with IP's: []
	I1025 09:15:25.463911  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt ...
	I1025 09:15:25.463935  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt: {Name:mk4787cbad8c90eaac31b2526653c5fcc02d8be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.464075  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key ...
	I1025 09:15:25.464086  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key: {Name:mk2a6539101452dd3e491062dcc240c2c53ba421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.464280  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:15:25.464315  279556 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:15:25.464324  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:15:25.464345  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:15:25.464370  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:15:25.464393  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:15:25.464431  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:25.464974  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:15:25.483378  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:15:25.501823  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:15:25.520038  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:15:25.539492  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 09:15:25.558370  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
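After the scp batch above, the SANs baked into the freshly copied apiserver cert can be confirmed on the node (a verification sketch, not part of minikube's own flow):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
    # Should list 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2, per the generation log above.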
	I1025 09:15:22.440164  279928 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.978689163s)
	I1025 09:15:22.440203  279928 kic.go:203] duration metric: took 4.978845546s to extract preloaded images to volume ...
	W1025 09:15:22.440286  279928 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:15:22.440329  279928 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:15:22.440367  279928 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:15:22.506269  279928 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-687131 --name kindnet-687131 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-687131 --network kindnet-687131 --ip 192.168.103.2 --volume kindnet-687131:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:15:22.799105  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Running}}
	I1025 09:15:22.820206  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:22.842929  279928 cli_runner.go:164] Run: docker exec kindnet-687131 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:15:22.892617  279928 oci.go:144] the created container "kindnet-687131" has a running status.
	I1025 09:15:22.892659  279928 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa...
	I1025 09:15:23.014325  279928 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:15:23.049009  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:23.070457  279928 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:15:23.070500  279928 kic_runner.go:114] Args: [docker exec --privileged kindnet-687131 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:15:23.134243  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:23.158093  279928 machine.go:93] provisionDockerMachine start ...
	I1025 09:15:23.158226  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.181058  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.181403  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.181428  279928 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:15:23.331938  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-687131
	
	I1025 09:15:23.331970  279928 ubuntu.go:182] provisioning hostname "kindnet-687131"
	I1025 09:15:23.332035  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.353853  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.354132  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.354153  279928 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-687131 && echo "kindnet-687131" | sudo tee /etc/hostname
	I1025 09:15:23.515310  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-687131
	
	I1025 09:15:23.515394  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.537215  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.537527  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.537560  279928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-687131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-687131/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-687131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:15:23.688101  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:15:23.688132  279928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:15:23.688166  279928 ubuntu.go:190] setting up certificates
	I1025 09:15:23.688179  279928 provision.go:84] configureAuth start
	I1025 09:15:23.688244  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:23.709237  279928 provision.go:143] copyHostCerts
	I1025 09:15:23.709298  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:15:23.709318  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:15:23.709404  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:15:23.709548  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:15:23.709565  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:15:23.709612  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:15:23.709727  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:15:23.709739  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:15:23.709774  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:15:23.709864  279928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.kindnet-687131 san=[127.0.0.1 192.168.103.2 kindnet-687131 localhost minikube]
	I1025 09:15:23.878508  279928 provision.go:177] copyRemoteCerts
	I1025 09:15:23.878559  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:15:23.878599  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.900441  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.009301  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:15:24.031121  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:15:24.051157  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1025 09:15:24.069764  279928 provision.go:87] duration metric: took 381.568636ms to configureAuth
	I1025 09:15:24.069798  279928 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:15:24.069969  279928 config.go:182] Loaded profile config "kindnet-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:24.070073  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.091045  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:24.091297  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:24.091319  279928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:15:24.366841  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
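The SSH step above writes /etc/sysconfig/crio.minikube with an --insecure-registry flag covering the service CIDR, then restarts CRI-O so the option takes effect. A minimal local-only Go sketch of that env-file write (hypothetical helper; the real flow runs the equivalent shell over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// writeCrioMinikubeOpts renders the env file shown in the log above and
// restarts crio so the flag takes effect. Sketch only: assumes local root,
// whereas minikube performs these steps over SSH inside the node container.
func writeCrioMinikubeOpts(serviceCIDR string) error {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		return err
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

func main() {
	if err := writeCrioMinikubeOpts("10.96.0.0/12"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}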
	I1025 09:15:24.366868  279928 machine.go:96] duration metric: took 1.208744926s to provisionDockerMachine
	I1025 09:15:24.366878  279928 client.go:171] duration metric: took 8.120898239s to LocalClient.Create
	I1025 09:15:24.366903  279928 start.go:167] duration metric: took 8.120973439s to libmachine.API.Create "kindnet-687131"
	I1025 09:15:24.366916  279928 start.go:293] postStartSetup for "kindnet-687131" (driver="docker")
	I1025 09:15:24.366927  279928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:15:24.366989  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:15:24.367022  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.386435  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.490100  279928 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:15:24.493952  279928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:15:24.493982  279928 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:15:24.493997  279928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:15:24.494064  279928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:15:24.494174  279928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:15:24.494310  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:15:24.502630  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:24.524399  279928 start.go:296] duration metric: took 157.46682ms for postStartSetup
	I1025 09:15:24.524816  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:24.543897  279928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json ...
	I1025 09:15:24.544201  279928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:15:24.544248  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.562392  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.660938  279928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:15:24.666060  279928 start.go:128] duration metric: took 8.422763522s to createHost
	I1025 09:15:24.666089  279928 start.go:83] releasing machines lock for "kindnet-687131", held for 8.422948298s
	I1025 09:15:24.666161  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:24.686558  279928 ssh_runner.go:195] Run: cat /version.json
	I1025 09:15:24.686619  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.686618  279928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:15:24.686694  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.707640  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.707737  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.805204  279928 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:24.861031  279928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:15:24.899252  279928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:15:24.904135  279928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:15:24.904213  279928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:15:24.931204  279928 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:15:24.931225  279928 start.go:495] detecting cgroup driver to use...
	I1025 09:15:24.931256  279928 detect.go:190] detected "systemd" cgroup driver on host os
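detect.go reports "systemd" as the host cgroup driver here. One common heuristic for that call, not necessarily minikube's exact logic, is to check whether /sys/fs/cgroup is the unified cgroup v2 mount:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// detectCgroupDriver guesses "systemd" when /sys/fs/cgroup is the unified
// cgroup v2 hierarchy, else "cgroupfs". Heuristic sketch only; minikube's
// actual detection lives in detect.go and may differ in detail.
func detectCgroupDriver() (string, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		return "", err
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		return "systemd", nil
	}
	return "cgroupfs", nil
}

func main() {
	driver, err := detectCgroupDriver()
	fmt.Println(driver, err)
}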
	I1025 09:15:24.931299  279928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:15:24.948666  279928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:15:24.962055  279928 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:15:24.962115  279928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:15:24.980169  279928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:15:24.998963  279928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:15:25.096394  279928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:15:25.188449  279928 docker.go:234] disabling docker service ...
	I1025 09:15:25.188539  279928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:15:25.207995  279928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:15:25.222036  279928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:15:25.319414  279928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:15:25.412233  279928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:15:25.425899  279928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:15:25.441635  279928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:15:25.441709  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.453116  279928 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:15:25.453188  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.462464  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.471732  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.480919  279928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:15:25.490188  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.499310  279928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.514357  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.523846  279928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:15:25.532211  279928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:15:25.540303  279928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:25.626699  279928 ssh_runner.go:195] Run: sudo systemctl restart crio
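The sed sequence above pins the pause image, switches cgroup_manager to "systemd", sets conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before restarting CRI-O. A generic Go sketch of the underlying "replace a key line in a TOML drop-in" step (hypothetical helper; a robust version would use a real TOML parser and preserve comments):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites any `key = ...` line in a CRI-O drop-in to the given
// value, mirroring the `sed -i 's|^.*key = .*$|key = value|'` invocations
// logged above.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %s", key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setTOMLKey(conf, "pause_image", `"registry.k8s.io/pause:3.10.1"`)
	_ = setTOMLKey(conf, "cgroup_manager", `"systemd"`)
}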
	I1025 09:15:25.739482  279928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:15:25.739551  279928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:15:25.743863  279928 start.go:563] Will wait 60s for crictl version
	I1025 09:15:25.743922  279928 ssh_runner.go:195] Run: which crictl
	I1025 09:15:25.747790  279928 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:15:25.774761  279928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:15:25.774855  279928 ssh_runner.go:195] Run: crio --version
	I1025 09:15:25.809624  279928 ssh_runner.go:195] Run: crio --version
	I1025 09:15:25.841924  279928 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:15:25.843191  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:25.860519  279928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:15:25.864742  279928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
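The bash one-liner above updates /etc/hosts idempotently: strip any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file back into place. The same pattern in Go, as a hypothetical local-only helper:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for `name` and appends a fresh
// "ip<TAB>name" mapping, mirroring the `{ grep -v ...; echo ...; } > tmp`
// one-liner in the log. Sketch only: real code should write via a temp file
// plus rename for atomicity, and needs root for /etc/hosts.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// grep -v equivalent: skip lines already ending in "<TAB>name".
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}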
	I1025 09:15:25.875509  279928 kubeadm.go:883] updating cluster {Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:15:25.875665  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:25.875729  279928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:25.913484  279928 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:25.913504  279928 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:15:25.913547  279928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:25.943471  279928 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:25.943492  279928 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:15:25.943500  279928 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:15:25.943574  279928 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-687131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
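The generated unit text above uses the standard systemd drop-in override pattern: the bare ExecStart= line first clears the ExecStart inherited from the base kubelet.service, since non-oneshot services may declare only one. A sketch that renders the same 10-kubeadm.conf shape from illustrative values:

package main

import (
	"os"
	"text/template"
)

// The empty ExecStart= is deliberate: a systemd drop-in must clear the
// inherited ExecStart before redefining it for a non-oneshot service.
// Values below are illustrative, taken from the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"NodeName":    "kindnet-687131",
		"NodeIP":      "192.168.103.2",
	})
}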
	I1025 09:15:25.943633  279928 ssh_runner.go:195] Run: crio config
	I1025 09:15:25.993112  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:25.993145  279928 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:15:25.993184  279928 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-687131 NodeName:kindnet-687131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:15:25.993331  279928 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-687131"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
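	The rendered kubeadm.yaml above is multi-document YAML: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration, separated by ---. A quick sanity check is to decode each document's header, sketched here with gopkg.in/yaml.v3 (assumed available; not minikube's own validation):

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// docHeader captures just the identifying fields of each config document.
type docHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// yaml.v3's decoder iterates multi-document files one Decode at a time.
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var h docHeader
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
	}
}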
	
	I1025 09:15:25.993383  279928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:15:26.002245  279928 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:15:26.002313  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:15:26.010918  279928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1025 09:15:25.584752  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:15:25.603272  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:15:25.622186  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:15:25.643606  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:15:25.661670  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:15:25.680760  279556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:15:25.695381  279556 ssh_runner.go:195] Run: openssl version
	I1025 09:15:25.701872  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:15:25.711383  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.715855  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.715916  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.753328  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:15:25.762817  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:15:25.773811  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.778344  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.778413  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.821755  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:15:25.831598  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:15:25.840888  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.845139  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.845193  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.882755  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
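The link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: TLS stacks look a CA up in /etc/ssl/certs as <hash>.0, so each PEM gets a symlink named after its hash. The same two steps in Go, shelling out to openssl exactly as the log does (hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates certsDir/<hash>.0 -> certPath, the layout
// OpenSSL uses to locate trusted CAs. Mirrors the `openssl x509 -hash`
// plus `ln -fs` pair in the log; sketch only, needs root to run for real.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}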
	I1025 09:15:25.894652  279556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:15:25.898800  279556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:15:25.898865  279556 kubeadm.go:400] StartCluster: {Name:auto-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:25.898959  279556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:15:25.899034  279556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:15:25.930732  279556 cri.go:89] found id: ""
	I1025 09:15:25.930809  279556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:15:25.940722  279556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:15:25.949522  279556 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:15:25.949590  279556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:15:25.958156  279556 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:15:25.958183  279556 kubeadm.go:157] found existing configuration files:
	
	I1025 09:15:25.958235  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:15:25.967172  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:15:25.967254  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:15:25.976067  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:15:25.984242  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:15:25.984302  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:15:25.993016  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:15:26.002372  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:15:26.002430  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:15:26.010747  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:15:26.018587  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:15:26.018650  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:15:26.026625  279556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:15:26.066622  279556 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:15:26.066753  279556 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:15:26.087508  279556 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:15:26.087610  279556 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:15:26.087697  279556 kubeadm.go:318] OS: Linux
	I1025 09:15:26.087754  279556 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:15:26.087834  279556 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:15:26.087912  279556 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:15:26.088003  279556 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:15:26.088089  279556 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:15:26.088182  279556 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:15:26.088238  279556 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:15:26.088292  279556 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:15:26.159998  279556 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:15:26.160173  279556 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:15:26.160349  279556 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:15:26.168799  279556 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:15:26.024244  279928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:15:26.039712  279928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1025 09:15:26.054486  279928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:15:26.058574  279928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:26.069835  279928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:26.162803  279928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:26.190619  279928 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131 for IP: 192.168.103.2
	I1025 09:15:26.190663  279928 certs.go:195] generating shared ca certs ...
	I1025 09:15:26.190687  279928 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.190849  279928 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:15:26.190912  279928 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:15:26.190926  279928 certs.go:257] generating profile certs ...
	I1025 09:15:26.190998  279928 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key
	I1025 09:15:26.191017  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt with IP's: []
	I1025 09:15:26.219280  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt ...
	I1025 09:15:26.219307  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt: {Name:mk42146df35f32426a420017cd45ab46d2df2c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.219512  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key ...
	I1025 09:15:26.219526  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key: {Name:mka29965ab108f0e622f83908536f26ef739d604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.219659  279928 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2
	I1025 09:15:26.219684  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:15:26.329319  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 ...
	I1025 09:15:26.329363  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2: {Name:mk046cb06650a4e0f6d7e42c28f3d48d22d4b0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.329540  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2 ...
	I1025 09:15:26.329554  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2: {Name:mk530378837f592628c77d98032c76a4244f4436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.329625  279928 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt
	I1025 09:15:26.329742  279928 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key
	I1025 09:15:26.329805  279928 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key
	I1025 09:15:26.329820  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt with IP's: []
	I1025 09:15:26.735246  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt ...
	I1025 09:15:26.735276  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt: {Name:mk782fc69db18d88753465cefca07ee61999cf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.735488  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key ...
	I1025 09:15:26.735505  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key: {Name:mkf8bb93af2e3d11ccf0ab894717b994adb063f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
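The crypto.go steps above issue CA-signed profile certificates; the apiserver cert's SAN list (10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2) covers the service VIP, loopback, and the node IP. A condensed crypto/x509 sketch of that issuance (illustrative subject and lifetime, not minikube's exact values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving certificate for the given IP SANs,
// signed by the shared CA, roughly matching the apiserver.crt generation
// logged above. Returns the DER-encoded cert and its private key.
func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"minikube"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // the SAN list: service VIP, loopback, node IP
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Illustrative only: a real caller loads ca.crt/ca.key from disk first.
	_ = signServingCert
}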
	I1025 09:15:26.735728  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:15:26.735765  279928 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:15:26.735772  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:15:26.735795  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:15:26.735827  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:15:26.735849  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:15:26.735888  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:26.736421  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:15:26.755820  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:15:26.774147  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:15:26.792920  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:15:26.811477  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:15:26.830199  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:15:26.848867  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:15:26.868102  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:15:26.887910  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:15:26.909492  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:15:26.927490  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:15:26.944979  279928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:15:26.958195  279928 ssh_runner.go:195] Run: openssl version
	I1025 09:15:26.965063  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:15:26.974977  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:26.979031  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:26.979097  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:27.018091  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:15:27.028873  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:15:27.038925  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.043775  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.043852  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.081032  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:15:27.090311  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:15:27.099410  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.103356  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.103409  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.140571  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:15:27.149701  279928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:15:27.153665  279928 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:15:27.153732  279928 kubeadm.go:400] StartCluster: {Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:27.153809  279928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:15:27.153884  279928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:15:27.183161  279928 cri.go:89] found id: ""
	I1025 09:15:27.183234  279928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:15:27.191544  279928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:15:27.200214  279928 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:15:27.200290  279928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:15:27.208454  279928 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:15:27.208475  279928 kubeadm.go:157] found existing configuration files:
	
	I1025 09:15:27.208526  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:15:27.217396  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:15:27.217456  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:15:27.225670  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:15:27.236151  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:15:27.236214  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:15:27.245161  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:15:27.254460  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:15:27.254531  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:15:27.264877  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:15:27.274289  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:15:27.274375  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:15:27.284912  279928 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:15:27.328789  279928 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:15:27.328867  279928 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:15:27.351178  279928 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:15:27.351294  279928 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:15:27.351391  279928 kubeadm.go:318] OS: Linux
	I1025 09:15:27.351484  279928 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:15:27.351562  279928 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:15:27.351632  279928 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:15:27.351718  279928 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:15:27.351793  279928 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:15:27.351868  279928 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:15:27.351932  279928 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:15:27.351988  279928 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:15:27.422485  279928 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:15:27.422668  279928 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:15:27.422808  279928 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:15:27.430489  279928 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 09:15:24.889176  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:26.889579  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:28.388946  268581 pod_ready.go:94] pod "coredns-66bc5c9577-72zpn" is "Ready"
	I1025 09:15:28.388977  268581 pod_ready.go:86] duration metric: took 37.505736505s for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.392090  268581 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.397093  268581 pod_ready.go:94] pod "etcd-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.397132  268581 pod_ready.go:86] duration metric: took 5.011857ms for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.399595  268581 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.403894  268581 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.403922  268581 pod_ready.go:86] duration metric: took 4.302014ms for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.406153  268581 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.587570  268581 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.587597  268581 pod_ready.go:86] duration metric: took 181.422256ms for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.787005  268581 pod_ready.go:83] waiting for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.187351  268581 pod_ready.go:94] pod "kube-proxy-rmqbr" is "Ready"
	I1025 09:15:29.187384  268581 pod_ready.go:86] duration metric: took 400.350279ms for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.387388  268581 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.787121  268581 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:29.787150  268581 pod_ready.go:86] duration metric: took 399.732519ms for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.787164  268581 pod_ready.go:40] duration metric: took 38.908438746s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:15:29.833272  268581 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:15:29.837751  268581 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-891466" cluster and "default" namespace by default
	I1025 09:15:26.172422  279556 out.go:252]   - Generating certificates and keys ...
	I1025 09:15:26.172535  279556 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:15:26.172634  279556 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:15:26.285628  279556 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:15:26.713013  279556 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:15:27.071494  279556 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:15:27.179216  279556 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:15:27.221118  279556 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:15:27.221288  279556 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-687131 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:15:27.928204  279556 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:15:27.928373  279556 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-687131 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:15:28.068848  279556 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:15:28.204926  279556 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:15:28.440284  279556 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:15:28.440376  279556 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:15:28.579490  279556 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:15:28.909219  279556 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:15:29.245788  279556 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:15:29.318242  279556 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:15:29.914745  279556 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:15:29.915521  279556 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:15:29.920405  279556 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:15:29.923766  279556 out.go:252]   - Booting up control plane ...
	I1025 09:15:29.923896  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:15:29.924007  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:15:29.924130  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:15:29.938834  279556 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:15:29.938992  279556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:15:29.947531  279556 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:15:29.947860  279556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:15:29.947903  279556 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:15:30.066710  279556 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:15:30.066882  279556 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:15:27.433789  279928 out.go:252]   - Generating certificates and keys ...
	I1025 09:15:27.433905  279928 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:15:27.434019  279928 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:15:27.635226  279928 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:15:28.010533  279928 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:15:28.223358  279928 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:15:28.339793  279928 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:15:28.504635  279928 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:15:28.504813  279928 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-687131 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:15:28.673200  279928 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:15:28.673381  279928 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-687131 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:15:28.779444  279928 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:15:28.943425  279928 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:15:29.037026  279928 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:15:29.037226  279928 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:15:29.100058  279928 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:15:29.360945  279928 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:15:29.761516  279928 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:15:30.697334  279928 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:15:30.927462  279928 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:15:30.928032  279928 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:15:30.933234  279928 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:15:30.936503  279928 out.go:252]   - Booting up control plane ...
	I1025 09:15:30.936633  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:15:30.936762  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:15:30.936842  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:15:30.949721  279928 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:15:30.949850  279928 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:15:30.956514  279928 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:15:30.956751  279928 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:15:30.956797  279928 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:15:31.067780  279556 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001155245s
	I1025 09:15:31.071443  279556 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:15:31.071574  279556 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 09:15:31.071722  279556 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:15:31.071865  279556 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:15:32.114667  279556 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.043109229s
	I1025 09:15:33.596604  279556 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.525171723s
	I1025 09:15:35.073741  279556 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00221941s
	I1025 09:15:35.087030  279556 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:15:35.099234  279556 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:15:35.109692  279556 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:15:35.109931  279556 kubeadm.go:318] [mark-control-plane] Marking the node auto-687131 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:15:35.119543  279556 kubeadm.go:318] [bootstrap-token] Using token: ds09vj.7po14nmutnpjjt8b
	I1025 09:15:35.121198  279556 out.go:252]   - Configuring RBAC rules ...
	I1025 09:15:35.121342  279556 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:15:35.126177  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:15:35.134866  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:15:35.137736  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:15:35.140350  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:15:35.144165  279556 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:15:35.479861  279556 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:15:31.054706  279928 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:15:31.054857  279928 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:15:32.055728  279928 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001030932s
	I1025 09:15:32.059944  279928 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:15:32.060054  279928 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1025 09:15:32.060171  279928 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:15:32.060273  279928 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:15:33.205829  279928 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.145837798s
	I1025 09:15:33.879959  279928 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.819970792s
	I1025 09:15:35.561861  279928 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501854256s
	I1025 09:15:35.574015  279928 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:15:35.585708  279928 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:15:35.595437  279928 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:15:35.595789  279928 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-687131 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:15:35.605853  279928 kubeadm.go:318] [bootstrap-token] Using token: a4kf7c.mn4eyqkotrnz0x3q
	I1025 09:15:35.607340  279928 out.go:252]   - Configuring RBAC rules ...
	I1025 09:15:35.607488  279928 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:15:35.611019  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:15:35.617043  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:15:35.619701  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:15:35.623283  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:15:35.625946  279928 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:15:35.967831  279928 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:15:35.901156  279556 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:15:36.480801  279556 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:15:36.481768  279556 kubeadm.go:318] 
	I1025 09:15:36.481872  279556 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:15:36.481883  279556 kubeadm.go:318] 
	I1025 09:15:36.481998  279556 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:15:36.482009  279556 kubeadm.go:318] 
	I1025 09:15:36.482046  279556 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:15:36.482134  279556 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:15:36.482232  279556 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:15:36.482255  279556 kubeadm.go:318] 
	I1025 09:15:36.482334  279556 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:15:36.482344  279556 kubeadm.go:318] 
	I1025 09:15:36.482421  279556 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:15:36.482432  279556 kubeadm.go:318] 
	I1025 09:15:36.482511  279556 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:15:36.482606  279556 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:15:36.482743  279556 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:15:36.482756  279556 kubeadm.go:318] 
	I1025 09:15:36.482883  279556 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:15:36.482995  279556 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:15:36.483005  279556 kubeadm.go:318] 
	I1025 09:15:36.483113  279556 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ds09vj.7po14nmutnpjjt8b \
	I1025 09:15:36.483287  279556 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:15:36.483321  279556 kubeadm.go:318] 	--control-plane 
	I1025 09:15:36.483329  279556 kubeadm.go:318] 
	I1025 09:15:36.483475  279556 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:15:36.483490  279556 kubeadm.go:318] 
	I1025 09:15:36.483608  279556 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ds09vj.7po14nmutnpjjt8b \
	I1025 09:15:36.483813  279556 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:15:36.486803  279556 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:15:36.486932  279556 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:15:36.486981  279556 cni.go:84] Creating CNI manager for ""
	I1025 09:15:36.486999  279556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:36.488810  279556 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:15:36.386755  279928 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:15:36.968068  279928 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:15:36.969126  279928 kubeadm.go:318] 
	I1025 09:15:36.969223  279928 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:15:36.969233  279928 kubeadm.go:318] 
	I1025 09:15:36.969328  279928 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:15:36.969337  279928 kubeadm.go:318] 
	I1025 09:15:36.969387  279928 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:15:36.969446  279928 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:15:36.969488  279928 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:15:36.969504  279928 kubeadm.go:318] 
	I1025 09:15:36.969598  279928 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:15:36.969608  279928 kubeadm.go:318] 
	I1025 09:15:36.969716  279928 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:15:36.969725  279928 kubeadm.go:318] 
	I1025 09:15:36.969769  279928 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:15:36.969873  279928 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:15:36.969975  279928 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:15:36.969984  279928 kubeadm.go:318] 
	I1025 09:15:36.970083  279928 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:15:36.970215  279928 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:15:36.970235  279928 kubeadm.go:318] 
	I1025 09:15:36.970345  279928 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token a4kf7c.mn4eyqkotrnz0x3q \
	I1025 09:15:36.970489  279928 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:15:36.970537  279928 kubeadm.go:318] 	--control-plane 
	I1025 09:15:36.970556  279928 kubeadm.go:318] 
	I1025 09:15:36.970702  279928 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:15:36.970713  279928 kubeadm.go:318] 
	I1025 09:15:36.970813  279928 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token a4kf7c.mn4eyqkotrnz0x3q \
	I1025 09:15:36.970967  279928 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:15:36.973483  279928 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:15:36.973617  279928 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:15:36.973668  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:36.975438  279928 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:15:36.490181  279556 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:15:36.494821  279556 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:15:36.494840  279556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:15:36.509308  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:15:36.754860  279556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:15:36.754998  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:36.755095  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-687131 minikube.k8s.io/updated_at=2025_10_25T09_15_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=auto-687131 minikube.k8s.io/primary=true
	I1025 09:15:36.779129  279556 ops.go:34] apiserver oom_adj: -16
	I1025 09:15:36.860306  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:37.360419  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:37.860734  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:38.360571  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:38.861452  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:39.360443  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:39.860733  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:40.360346  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
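The repeated `get sa default` runs above are minikube polling for the default service account, which only appears once the apiserver is serving and the service-account controller has run. A hedged Go sketch of such a poll (an assumed reconstruction, not minikube's real code; the binary path, flags, and roughly 500ms interval are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Binary path and flags are copied from the log entries above.
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	for {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists; apiserver is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // ~500ms between attempts in the log
	}
}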
	
	
	==> CRI-O <==
	Oct 25 09:15:05 embed-certs-106968 crio[562]: time="2025-10-25T09:15:05.225957347Z" level=info msg="Started container" PID=1739 containerID=eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper id=e2ceec34-650c-4730-a700-299a33fa785d name=/runtime.v1.RuntimeService/StartContainer sandboxID=97a81c4bc75b9153cc1f1f33db156a79a2f2c20aeea69cb4bc89abc77f69d0ad
	Oct 25 09:15:05 embed-certs-106968 crio[562]: time="2025-10-25T09:15:05.342012125Z" level=info msg="Removing container: aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b" id=f6c56f28-cbf4-4ff6-b93d-28ddc8223f2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:05 embed-certs-106968 crio[562]: time="2025-10-25T09:15:05.35361242Z" level=info msg="Removed container aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=f6c56f28-cbf4-4ff6-b93d-28ddc8223f2a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.38275117Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f70af73-e3af-4dfc-a388-adc8efdfb54d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.383956892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9c49a2f9-3229-43dd-8699-04d6b16d9b2b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.385628697Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=54ea6799-a21b-4110-9fec-feb8a15ee4f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.385963826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391165846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391384143Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/03ce3abd5bf1346b7def3ff04c725957d5f3356ac21491d0bb40519736dc65bd/merged/etc/passwd: no such file or directory"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391544377Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/03ce3abd5bf1346b7def3ff04c725957d5f3356ac21491d0bb40519736dc65bd/merged/etc/group: no such file or directory"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.391971885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.422018466Z" level=info msg="Created container 3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f: kube-system/storage-provisioner/storage-provisioner" id=54ea6799-a21b-4110-9fec-feb8a15ee4f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.422805261Z" level=info msg="Starting container: 3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f" id=741c03ee-2e0b-4d11-a7fe-668f3af68c2b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:19 embed-certs-106968 crio[562]: time="2025-10-25T09:15:19.425158986Z" level=info msg="Started container" PID=1753 containerID=3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f description=kube-system/storage-provisioner/storage-provisioner id=741c03ee-2e0b-4d11-a7fe-668f3af68c2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf7ee0639585a932c033b8fa6851607e075486e86ea44fc0b3df8f57a2af47a6
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.229155243Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7e39ef24-9310-4621-82d6-aeab79099573 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.230259709Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67b06e2b-2cee-49f5-975e-481cb2089f40 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.231335922Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=cdf2b6dc-6e2d-4b83-a92f-e7724888343f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.231478731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.238028855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.238670117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.272350201Z" level=info msg="Created container 7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=cdf2b6dc-6e2d-4b83-a92f-e7724888343f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.273173655Z" level=info msg="Starting container: 7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97" id=e74ccc8c-cfed-4280-929a-3bb0ad194bf9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.275560987Z" level=info msg="Started container" PID=1787 containerID=7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper id=e74ccc8c-cfed-4280-929a-3bb0ad194bf9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97a81c4bc75b9153cc1f1f33db156a79a2f2c20aeea69cb4bc89abc77f69d0ad
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.409221007Z" level=info msg="Removing container: eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b" id=9b34fa94-e914-4b3b-8c93-3a0e0f0925a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:27 embed-certs-106968 crio[562]: time="2025-10-25T09:15:27.420334886Z" level=info msg="Removed container eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c/dashboard-metrics-scraper" id=9b34fa94-e914-4b3b-8c93-3a0e0f0925a2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7ed2d31508da6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   3                   97a81c4bc75b9       dashboard-metrics-scraper-6ffb444bf9-h7z7c   kubernetes-dashboard
	3fe0a355171dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   bf7ee0639585a       storage-provisioner                          kube-system
	a5f2279abdd3d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   5fb798926aa0e       kubernetes-dashboard-855c9754f9-bffzw        kubernetes-dashboard
	0553f0bb1ffb9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   13ba78e35c7fe       coredns-66bc5c9577-dx4j4                     kube-system
	b9eea2497cea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   bf7ee0639585a       storage-provisioner                          kube-system
	7a79aee2c4047       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   069799b8e4f9a       kindnet-cf69x                                kube-system
	771f6d67f00e1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   742f47fb36c62       busybox                                      default
	c7f9b2e31210a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   3fbbb616861e9       kube-proxy-sm8hw                             kube-system
	c648a3db147ad       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   7e2e9a60890f6       kube-apiserver-embed-certs-106968            kube-system
	2ef3d40943865       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   84fd10fbabe9d       kube-scheduler-embed-certs-106968            kube-system
	8c0ca7560cc31       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   ad8eed87c64a6       kube-controller-manager-embed-certs-106968   kube-system
	5f6ebdb3d286f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   007119faf23cb       etcd-embed-certs-106968                      kube-system
	
	
	==> coredns [0553f0bb1ffb9292e667528ee940875c401cef5ffdc7d9d0b2a6254ea2f48bb4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59000 - 9984 "HINFO IN 4838945748492174529.2678795752666801554. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.059621298s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
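10.96.0.1:443 in the errors above is the in-cluster `kubernetes` Service VIP, so these timeouts mean CoreDNS could not reach the apiserver through the service network for a while after startup. A hypothetical probe one could run from inside a pod to reproduce the same check (illustrative only; the address is taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster `kubernetes` Service VIP from the
	// CoreDNS errors; this probe is only meaningful from inside a pod.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}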
	
	
	==> describe nodes <==
	Name:               embed-certs-106968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-106968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=embed-certs-106968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_13_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-106968
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:15:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:15:17 +0000   Sat, 25 Oct 2025 09:14:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-106968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a272e628-6722-4504-b4e0-39037ebf73c9
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-dx4j4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m14s
	  kube-system                 etcd-embed-certs-106968                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-cf69x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-embed-certs-106968             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-106968    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-sm8hw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-embed-certs-106968             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h7z7c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bffzw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m13s              kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m20s              kubelet          Node embed-certs-106968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s              kubelet          Node embed-certs-106968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s              kubelet          Node embed-certs-106968 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m20s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m16s              node-controller  Node embed-certs-106968 event: Registered Node embed-certs-106968 in Controller
	  Normal  NodeReady                93s                kubelet          Node embed-certs-106968 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-106968 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-106968 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-106968 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-106968 event: Registered Node embed-certs-106968 in Controller
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [5f6ebdb3d286f37cd6ede568d0ef9b8b18e5bcd2de579823ff85eae51b26b151] <==
	{"level":"warn","ts":"2025-10-25T09:14:46.506991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.520878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.527456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.533990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.540890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.547001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.553178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.560110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.567053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.573446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.580468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.587617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.594403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.607375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.614209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.621144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:46.682476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:15:18.265974Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.169595ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:15:18.266115Z","caller":"traceutil/trace.go:172","msg":"trace[1986785789] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:621; }","duration":"117.337435ms","start":"2025-10-25T09:15:18.148755Z","end":"2025-10-25T09:15:18.266093Z","steps":["trace[1986785789] 'range keys from in-memory index tree'  (duration: 117.118505ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:15:18.266549Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.833807ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765720510285700 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-g5sidkm4nzelivkullms6t66ti\" mod_revision:615 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-g5sidkm4nzelivkullms6t66ti\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-g5sidkm4nzelivkullms6t66ti\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T09:15:18.266632Z","caller":"traceutil/trace.go:172","msg":"trace[2116175180] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"103.244941ms","start":"2025-10-25T09:15:18.163375Z","end":"2025-10-25T09:15:18.266620Z","steps":["trace[2116175180] 'read index received'  (duration: 40.781µs)","trace[2116175180] 'applied index is now lower than readState.Index'  (duration: 103.203415ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:15:18.266788Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.413297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-dx4j4\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-10-25T09:15:18.266776Z","caller":"traceutil/trace.go:172","msg":"trace[333997458] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"205.54282ms","start":"2025-10-25T09:15:18.061216Z","end":"2025-10-25T09:15:18.266759Z","steps":["trace[333997458] 'process raft request'  (duration: 30.910019ms)","trace[333997458] 'compare'  (duration: 173.679139ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:15:18.266849Z","caller":"traceutil/trace.go:172","msg":"trace[230986260] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-dx4j4; range_end:; response_count:1; response_revision:622; }","duration":"103.479548ms","start":"2025-10-25T09:15:18.163361Z","end":"2025-10-25T09:15:18.266841Z","steps":["trace[230986260] 'agreement among raft nodes before linearized reading'  (duration: 103.316847ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:15:20.521515Z","caller":"traceutil/trace.go:172","msg":"trace[457088610] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"126.179071ms","start":"2025-10-25T09:15:20.395315Z","end":"2025-10-25T09:15:20.521494Z","steps":["trace[457088610] 'process raft request'  (duration: 126.025258ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:15:41 up 58 min,  0 user,  load average: 4.81, 3.47, 2.35
	Linux embed-certs-106968 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7a79aee2c4047ff17a490493c6fabf5d9bf45c412c892472070caeb72cab191d] <==
	I1025 09:14:48.779896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:14:48.780204       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:14:48.780371       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:14:48.780389       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:14:48.780416       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:14:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:14:48.982606       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:14:48.982634       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:14:48.982673       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:14:48.982786       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:14:49.575075       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:14:49.575115       1 metrics.go:72] Registering metrics
	I1025 09:14:49.575219       1 controller.go:711] "Syncing nftables rules"
	I1025 09:14:58.982762       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:14:58.982855       1 main.go:301] handling current node
	I1025 09:15:08.984746       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:08.984797       1 main.go:301] handling current node
	I1025 09:15:18.982393       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:18.982451       1 main.go:301] handling current node
	I1025 09:15:28.983136       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:28.983194       1 main.go:301] handling current node
	I1025 09:15:38.985733       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:15:38.985765       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c648a3db147adba437828b8bb877ee3ed46dad5ba403d4d1114c0bb1060d15d1] <==
	I1025 09:14:47.401429       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:14:47.391093       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:14:47.391026       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1025 09:14:47.414675       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:14:47.419221       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:14:47.430013       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:14:47.430260       1 policy_source.go:240] refreshing policies
	I1025 09:14:47.447069       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:14:47.476110       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:14:47.485501       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:14:47.487883       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:14:47.488029       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:14:47.487984       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:14:47.498268       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:14:47.859740       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:14:47.893771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:14:47.917663       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:14:47.929008       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:14:47.936544       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:14:47.974013       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.241.171"}
	I1025 09:14:47.987901       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.136.50"}
	I1025 09:14:48.291818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:14:50.804158       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:14:51.203652       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:14:51.253993       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8c0ca7560cc31a31d55fa3e6598cfaffb772455fa1a71284e0cc016b5d7ca083] <==
	I1025 09:14:50.750666       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:14:50.750759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:14:50.750767       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:14:50.750773       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:14:50.750781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:14:50.753016       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:14:50.754014       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:14:50.754023       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:14:50.755182       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:14:50.757505       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:14:50.758977       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:14:50.759073       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:14:50.760264       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:14:50.762515       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:14:50.762611       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:14:50.762703       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-106968"
	I1025 09:14:50.762767       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:14:50.764857       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:14:50.767102       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:14:50.768274       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:14:50.769416       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:14:50.771701       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:14:50.772897       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:14:50.777169       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:14:50.784468       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c7f9b2e31210a0e8cec194cd09bb4249f8bdfccefdcdfc0247b7045f2826a78c] <==
	I1025 09:14:48.647962       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:14:48.712859       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:14:48.814792       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:14:48.814842       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 09:14:48.814945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:14:48.835486       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:14:48.835544       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:14:48.840958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:14:48.841347       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:14:48.841369       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:48.842789       1 config.go:200] "Starting service config controller"
	I1025 09:14:48.842823       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:14:48.842868       1 config.go:309] "Starting node config controller"
	I1025 09:14:48.842879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:14:48.842988       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:14:48.843005       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:14:48.843036       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:14:48.843045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:14:48.943423       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:14:48.943447       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:14:48.943492       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:14:48.943549       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2ef3d4094386517bb13e629728d51979ce32350e4cc4fdc820576cb2101fd8b5] <==
	I1025 09:14:45.603158       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:14:47.358269       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:14:47.358306       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:14:47.358334       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:14:47.358345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:14:47.390880       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:14:47.390914       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:47.400081       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:14:47.400244       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:47.400260       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:47.400282       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:14:47.500971       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:14:55 embed-certs-106968 kubelet[717]: I1025 09:14:55.293192     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:14:55 embed-certs-106968 kubelet[717]: E1025 09:14:55.293400     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:14:55 embed-certs-106968 kubelet[717]: I1025 09:14:55.293775     717 scope.go:117] "RemoveContainer" containerID="018de8aa7e9d4f0baf21f752e1e259f5298689ed1a4e60f4cc8e058d651de890"
	Oct 25 09:14:56 embed-certs-106968 kubelet[717]: I1025 09:14:56.299330     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:14:56 embed-certs-106968 kubelet[717]: E1025 09:14:56.299534     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:14:57 embed-certs-106968 kubelet[717]: I1025 09:14:57.302016     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:14:57 embed-certs-106968 kubelet[717]: E1025 09:14:57.302191     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:14:58 embed-certs-106968 kubelet[717]: I1025 09:14:58.325204     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bffzw" podStartSLOduration=0.972322642 podStartE2EDuration="7.325175402s" podCreationTimestamp="2025-10-25 09:14:51 +0000 UTC" firstStartedPulling="2025-10-25 09:14:51.701816148 +0000 UTC m=+7.583621174" lastFinishedPulling="2025-10-25 09:14:58.054668903 +0000 UTC m=+13.936473934" observedRunningTime="2025-10-25 09:14:58.324612875 +0000 UTC m=+14.206417919" watchObservedRunningTime="2025-10-25 09:14:58.325175402 +0000 UTC m=+14.206980445"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: I1025 09:15:05.170109     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: I1025 09:15:05.339872     717 scope.go:117] "RemoveContainer" containerID="aecbc99fd79719ba82dc476c4094b31880dded638a2ec89d9ffceaf40a0e699b"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: I1025 09:15:05.340155     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:05 embed-certs-106968 kubelet[717]: E1025 09:15:05.340412     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:15 embed-certs-106968 kubelet[717]: I1025 09:15:15.171018     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:15 embed-certs-106968 kubelet[717]: E1025 09:15:15.171268     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:19 embed-certs-106968 kubelet[717]: I1025 09:15:19.382247     717 scope.go:117] "RemoveContainer" containerID="b9eea2497cea5220336461976fd7a8b5dc1b5ffee643fdef046f11ca9427edd6"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: I1025 09:15:27.228580     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: I1025 09:15:27.407913     717 scope.go:117] "RemoveContainer" containerID="eec02f332bfa5237b7bc9a42203adcbe12468e662d63cf1364da3a24e4365c0b"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: I1025 09:15:27.408213     717 scope.go:117] "RemoveContainer" containerID="7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97"
	Oct 25 09:15:27 embed-certs-106968 kubelet[717]: E1025 09:15:27.408455     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:35 embed-certs-106968 kubelet[717]: I1025 09:15:35.170706     717 scope.go:117] "RemoveContainer" containerID="7ed2d31508da6ad3c13680d63fb2e7e22c51f5a0977aab692b0468aff5582e97"
	Oct 25 09:15:35 embed-certs-106968 kubelet[717]: E1025 09:15:35.170935     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h7z7c_kubernetes-dashboard(b0759fc5-436f-4c7b-b2f2-d48359189d53)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h7z7c" podUID="b0759fc5-436f-4c7b-b2f2-d48359189d53"
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:15:36 embed-certs-106968 systemd[1]: kubelet.service: Consumed 1.824s CPU time.
	
	
	==> kubernetes-dashboard [a5f2279abdd3d8573970804fa06c858ff73b788144c0c791ed73128c4381f6d0] <==
	2025/10/25 09:14:58 Starting overwatch
	2025/10/25 09:14:58 Using namespace: kubernetes-dashboard
	2025/10/25 09:14:58 Using in-cluster config to connect to apiserver
	2025/10/25 09:14:58 Using secret token for csrf signing
	2025/10/25 09:14:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:14:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:14:58 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:14:58 Generating JWE encryption key
	2025/10/25 09:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:14:58 Initializing JWE encryption key from synchronized object
	2025/10/25 09:14:58 Creating in-cluster Sidecar client
	2025/10/25 09:14:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:14:58 Serving insecurely on HTTP port: 9090
	2025/10/25 09:15:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3fe0a355171dd224fa43806ab55b14ef4e60d58c0b0bdcc93b8e0ab1c122d62f] <==
	I1025 09:15:19.440603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:15:19.450335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:15:19.450385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:15:19.453435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:22.910039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:27.170885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:30.769196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:33.822441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.845215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.851961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:36.852190       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:15:36.852259       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e170d88-5532-46a5-99b3-fc8a977a4e4b", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-106968_56da4eb5-92f6-4f7a-a4f4-75ada9c31b6b became leader
	I1025 09:15:36.852453       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-106968_56da4eb5-92f6-4f7a-a4f4-75ada9c31b6b!
	W1025 09:15:36.855879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.861116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:36.953535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-106968_56da4eb5-92f6-4f7a-a4f4-75ada9c31b6b!
	W1025 09:15:38.866138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:38.871058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:40.875757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:40.880408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b9eea2497cea5220336461976fd7a8b5dc1b5ffee643fdef046f11ca9427edd6] <==
	I1025 09:14:48.609001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:15:18.615091       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
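The embed-certs log above shows two distinct symptoms: dashboard-metrics-scraper is stuck in CrashLoopBackOff with the back-off climbing from 10s to 40s, and the first storage-provisioner instance died on an apiserver i/o timeout before its replacement acquired the lease. A hedged triage sketch using only standard kubectl commands, with the context and pod name copied from the log ("--previous" fetches the crashed container's last output):

	kubectl --context embed-certs-106968 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-h7z7c --previous
	kubectl --context embed-certs-106968 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-h7z7c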
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106968 -n embed-certs-106968
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-106968 -n embed-certs-106968: exit status 2 (493.129849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-106968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.25s)
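As the default-k8s-diff-port transcript below makes explicit, the pause path fails at container enumeration: "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory" even though crictl still lists the containers. A minimal sketch for reproducing that check by hand on this profile (commands copied from the harness output; whether /run/runc should exist under crio here is the open question, not a given):

	minikube ssh -p embed-certs-106968
	# inside the node, replay the same two listings the pause path runs:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json          # expected to fail: open /run/runc: no such file or directory
	ls -ld /run/runc                # confirm whether the runc state directory is missing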

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-891466 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-891466 --alsologtostderr -v=1: exit status 80 (2.338617889s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-891466 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:15:42.705544  286936 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:42.705843  286936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:42.705854  286936 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:42.705858  286936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:42.706041  286936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:42.706283  286936 out.go:368] Setting JSON to false
	I1025 09:15:42.706337  286936 mustload.go:65] Loading cluster: default-k8s-diff-port-891466
	I1025 09:15:42.706771  286936 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:42.707140  286936 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-891466 --format={{.State.Status}}
	I1025 09:15:42.727666  286936 host.go:66] Checking if "default-k8s-diff-port-891466" exists ...
	I1025 09:15:42.727930  286936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:42.801107  286936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:15:42.788275615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:42.801910  286936 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-891466 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:15:42.803783  286936 out.go:179] * Pausing node default-k8s-diff-port-891466 ... 
	I1025 09:15:42.805307  286936 host.go:66] Checking if "default-k8s-diff-port-891466" exists ...
	I1025 09:15:42.805571  286936 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:42.805624  286936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-891466
	I1025 09:15:42.829235  286936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/default-k8s-diff-port-891466/id_rsa Username:docker}
	I1025 09:15:42.938273  286936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:42.959091  286936 pause.go:52] kubelet running: true
	I1025 09:15:42.959162  286936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:43.157909  286936 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:43.158018  286936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:43.234976  286936 cri.go:89] found id: "a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d"
	I1025 09:15:43.235012  286936 cri.go:89] found id: "ab827ee7537580129a5443a427008b45db6bea12d0e1320adb16f5314fd100da"
	I1025 09:15:43.235016  286936 cri.go:89] found id: "2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f"
	I1025 09:15:43.235019  286936 cri.go:89] found id: "e1ab809e55dad3c3b367621a2d2b4a7a079dcbfc73c1c5023db8aeba72f7c648"
	I1025 09:15:43.235022  286936 cri.go:89] found id: "2315c753ecdae32bd3c2309c84279ae635e349a3bd022e9ca8e253e5ad725ccb"
	I1025 09:15:43.235027  286936 cri.go:89] found id: "0b4273672045197aa9930a7861b7ea9c702bee1c1761abe1fac0ba82696ba0bb"
	I1025 09:15:43.235032  286936 cri.go:89] found id: "e554bff30a14261e8aba9d0b797b3aa317f80c74e0ea6c81ce9fc3a7956a1e40"
	I1025 09:15:43.235035  286936 cri.go:89] found id: "fd90aba5098707e9b4565da4efbbb072612744bbe8babcb4796b4df48b81c1bc"
	I1025 09:15:43.235039  286936 cri.go:89] found id: "ad9cca7cd898cabfdf3a0ac2e99271e2139eef9d4a535d762fe568acfcd007ea"
	I1025 09:15:43.235054  286936 cri.go:89] found id: "c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	I1025 09:15:43.235059  286936 cri.go:89] found id: "cbc2c58c4b15cc3dd1f62a796ae52abc67a963715dca52306484371b9990aaf3"
	I1025 09:15:43.235063  286936 cri.go:89] found id: ""
	I1025 09:15:43.235109  286936 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:43.247321  286936 retry.go:31] will retry after 165.985318ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:43Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:43.413783  286936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:43.427396  286936 pause.go:52] kubelet running: false
	I1025 09:15:43.427473  286936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:43.573000  286936 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:43.573085  286936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:43.640778  286936 cri.go:89] found id: "a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d"
	I1025 09:15:43.640803  286936 cri.go:89] found id: "ab827ee7537580129a5443a427008b45db6bea12d0e1320adb16f5314fd100da"
	I1025 09:15:43.640808  286936 cri.go:89] found id: "2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f"
	I1025 09:15:43.640812  286936 cri.go:89] found id: "e1ab809e55dad3c3b367621a2d2b4a7a079dcbfc73c1c5023db8aeba72f7c648"
	I1025 09:15:43.640816  286936 cri.go:89] found id: "2315c753ecdae32bd3c2309c84279ae635e349a3bd022e9ca8e253e5ad725ccb"
	I1025 09:15:43.640821  286936 cri.go:89] found id: "0b4273672045197aa9930a7861b7ea9c702bee1c1761abe1fac0ba82696ba0bb"
	I1025 09:15:43.640825  286936 cri.go:89] found id: "e554bff30a14261e8aba9d0b797b3aa317f80c74e0ea6c81ce9fc3a7956a1e40"
	I1025 09:15:43.640830  286936 cri.go:89] found id: "fd90aba5098707e9b4565da4efbbb072612744bbe8babcb4796b4df48b81c1bc"
	I1025 09:15:43.640834  286936 cri.go:89] found id: "ad9cca7cd898cabfdf3a0ac2e99271e2139eef9d4a535d762fe568acfcd007ea"
	I1025 09:15:43.640843  286936 cri.go:89] found id: "c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	I1025 09:15:43.640852  286936 cri.go:89] found id: "cbc2c58c4b15cc3dd1f62a796ae52abc67a963715dca52306484371b9990aaf3"
	I1025 09:15:43.640856  286936 cri.go:89] found id: ""
	I1025 09:15:43.640906  286936 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:43.653524  286936 retry.go:31] will retry after 496.316666ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:43Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:44.150872  286936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:44.166320  286936 pause.go:52] kubelet running: false
	I1025 09:15:44.166383  286936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:44.324490  286936 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:44.324573  286936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:44.394559  286936 cri.go:89] found id: "a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d"
	I1025 09:15:44.394590  286936 cri.go:89] found id: "ab827ee7537580129a5443a427008b45db6bea12d0e1320adb16f5314fd100da"
	I1025 09:15:44.394595  286936 cri.go:89] found id: "2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f"
	I1025 09:15:44.394600  286936 cri.go:89] found id: "e1ab809e55dad3c3b367621a2d2b4a7a079dcbfc73c1c5023db8aeba72f7c648"
	I1025 09:15:44.394605  286936 cri.go:89] found id: "2315c753ecdae32bd3c2309c84279ae635e349a3bd022e9ca8e253e5ad725ccb"
	I1025 09:15:44.394610  286936 cri.go:89] found id: "0b4273672045197aa9930a7861b7ea9c702bee1c1761abe1fac0ba82696ba0bb"
	I1025 09:15:44.394614  286936 cri.go:89] found id: "e554bff30a14261e8aba9d0b797b3aa317f80c74e0ea6c81ce9fc3a7956a1e40"
	I1025 09:15:44.394619  286936 cri.go:89] found id: "fd90aba5098707e9b4565da4efbbb072612744bbe8babcb4796b4df48b81c1bc"
	I1025 09:15:44.394624  286936 cri.go:89] found id: "ad9cca7cd898cabfdf3a0ac2e99271e2139eef9d4a535d762fe568acfcd007ea"
	I1025 09:15:44.394631  286936 cri.go:89] found id: "c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	I1025 09:15:44.394635  286936 cri.go:89] found id: "cbc2c58c4b15cc3dd1f62a796ae52abc67a963715dca52306484371b9990aaf3"
	I1025 09:15:44.394653  286936 cri.go:89] found id: ""
	I1025 09:15:44.394698  286936 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:44.406786  286936 retry.go:31] will retry after 292.469207ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:44Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:15:44.700372  286936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:15:44.714409  286936 pause.go:52] kubelet running: false
	I1025 09:15:44.714470  286936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:15:44.862493  286936 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:15:44.862565  286936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:15:44.936603  286936 cri.go:89] found id: "a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d"
	I1025 09:15:44.936629  286936 cri.go:89] found id: "ab827ee7537580129a5443a427008b45db6bea12d0e1320adb16f5314fd100da"
	I1025 09:15:44.936633  286936 cri.go:89] found id: "2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f"
	I1025 09:15:44.936636  286936 cri.go:89] found id: "e1ab809e55dad3c3b367621a2d2b4a7a079dcbfc73c1c5023db8aeba72f7c648"
	I1025 09:15:44.936678  286936 cri.go:89] found id: "2315c753ecdae32bd3c2309c84279ae635e349a3bd022e9ca8e253e5ad725ccb"
	I1025 09:15:44.936684  286936 cri.go:89] found id: "0b4273672045197aa9930a7861b7ea9c702bee1c1761abe1fac0ba82696ba0bb"
	I1025 09:15:44.936688  286936 cri.go:89] found id: "e554bff30a14261e8aba9d0b797b3aa317f80c74e0ea6c81ce9fc3a7956a1e40"
	I1025 09:15:44.936693  286936 cri.go:89] found id: "fd90aba5098707e9b4565da4efbbb072612744bbe8babcb4796b4df48b81c1bc"
	I1025 09:15:44.936697  286936 cri.go:89] found id: "ad9cca7cd898cabfdf3a0ac2e99271e2139eef9d4a535d762fe568acfcd007ea"
	I1025 09:15:44.936705  286936 cri.go:89] found id: "c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	I1025 09:15:44.936710  286936 cri.go:89] found id: "cbc2c58c4b15cc3dd1f62a796ae52abc67a963715dca52306484371b9990aaf3"
	I1025 09:15:44.936714  286936 cri.go:89] found id: ""
	I1025 09:15:44.936751  286936 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:15:44.960536  286936 out.go:203] 
	W1025 09:15:44.962958  286936 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:15:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:15:44.962996  286936 out.go:285] * 
	* 
	W1025 09:15:44.966964  286936 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:15:44.969059  286936 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-891466 --alsologtostderr -v=1 failed: exit status 80
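Note the sequencing in the stderr above: the first iteration sees "kubelet running: true" and disables it, every later retry sees "kubelet running: false", and the run then aborts on the runc listing, so the node is left half-paused with kubelet stopped. A hedged recovery sketch, assuming the profile is still up (minikube ssh and unpause are standard subcommands; the exact restore step may differ):

	minikube ssh -p default-k8s-diff-port-891466 "sudo systemctl enable --now kubelet"
	minikube unpause -p default-k8s-diff-port-891466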
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-891466
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-891466:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d",
	        "Created": "2025-10-25T09:13:33.96941541Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268784,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:14:39.491794999Z",
	            "FinishedAt": "2025-10-25T09:14:38.636263808Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/hosts",
	        "LogPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d-json.log",
	        "Name": "/default-k8s-diff-port-891466",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-891466:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-891466",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d",
	                "LowerDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-891466",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-891466/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-891466",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-891466",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-891466",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95a5e9e3d8e9a1a53a22224479a59f3b032e2d5100ad3aef45f5b731747003fc",
	            "SandboxKey": "/var/run/docker/netns/95a5e9e3d8e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-891466": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:bf:bd:f6:94:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b0da8ae663923a6a96619f04827a51fa66502ca86c536d48116f797af6b2cd6f",
	                    "EndpointID": "e503a3bca52ae8e16d514ef2ff2badceda4f12737bf9e481448da026b9a0ef0d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-891466",
	                        "f52ce971b3b8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
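The port mappings in the inspect dump above can be read back without re-parsing the full JSON; docker's --format flag indexes straight into NetworkSettings.Ports with the same Go template minikube's own provisioner uses later in this log. A minimal sketch against this container:

	# print the host port backing the apiserver's 8444/tcp mapping
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-891466
	# per the JSON above, this prints: 33098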
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466: exit status 2 (347.918318ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-891466 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-891466 logs -n 25: (1.260098857s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-106968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-891466 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ stop    │ -p embed-certs-106968 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p newest-cni-036155 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-106968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-891466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-036155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ image   │ newest-cni-036155 image list --format=json                                                                                                                                                                                                    │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p newest-cni-036155 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p auto-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-687131                  │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p kindnet-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-687131               │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ image   │ embed-certs-106968 image list --format=json                                                                                                                                                                                                   │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p embed-certs-106968 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ image   │ default-k8s-diff-port-891466 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p embed-certs-106968                                                                                                                                                                                                                         │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-891466 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
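The leading [IWEF] character encodes severity (Info, Warning, Error, Fatal), so a capture in this format can be filtered by level with plain grep; a sketch, assuming the log was saved to a file named minikube.log:

	# keep only warning-level lines from the capture
	grep -E '^[[:space:]]*W[0-9]{4}' minikube.log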
	I1025 09:15:16.020787  279928 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:16.021157  279928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:16.021171  279928 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:16.021178  279928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:16.021473  279928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:16.022216  279928 out.go:368] Setting JSON to false
	I1025 09:15:16.023688  279928 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3464,"bootTime":1761380252,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:16.023798  279928 start.go:141] virtualization: kvm guest
	I1025 09:15:16.026632  279928 out.go:179] * [kindnet-687131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:16.028561  279928 notify.go:220] Checking for updates...
	I1025 09:15:16.028593  279928 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:15:16.030119  279928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:16.031829  279928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:16.033381  279928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:15:16.034874  279928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:15:16.036503  279928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:15:16.038554  279928 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038660  279928 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038733  279928 config.go:182] Loaded profile config "embed-certs-106968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:16.038820  279928 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:16.066342  279928 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:16.066508  279928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:16.134706  279928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:68 SystemTime:2025-10-25 09:15:16.122944363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
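Single fields can be pulled from docker info directly instead of parsing the {{json .}} blob above; for example the cgroup driver the provisioner keys off:

	docker system info --format '{{.CgroupDriver}}'
	# => systemd, matching the info dump above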
	I1025 09:15:16.134814  279928 docker.go:318] overlay module found
	I1025 09:15:16.137572  279928 out.go:179] * Using the docker driver based on user configuration
	I1025 09:15:16.140435  279928 start.go:305] selected driver: docker
	I1025 09:15:16.140457  279928 start.go:925] validating driver "docker" against <nil>
	I1025 09:15:16.140470  279928 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:15:16.141086  279928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:16.207410  279928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-25 09:15:16.195269689 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:16.207685  279928 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:15:16.207951  279928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:15:16.210244  279928 out.go:179] * Using Docker driver with root privileges
	I1025 09:15:16.211682  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:16.211710  279928 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:15:16.211813  279928 start.go:349] cluster config:
	{Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:16.213496  279928 out.go:179] * Starting "kindnet-687131" primary control-plane node in "kindnet-687131" cluster
	I1025 09:15:16.214878  279928 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:15:16.216267  279928 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:16.217483  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:16.217519  279928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:16.217533  279928 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:15:16.217544  279928 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:16.217693  279928 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:15:16.217707  279928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:15:16.217850  279928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json ...
	I1025 09:15:16.217881  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json: {Name:mk59edad4f0461fbcf9ec630103ca3869ab6269c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
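The profile config saved here is plain JSON under MINIKUBE_HOME, so fields from the cluster-config dump above can be checked on disk; a sketch assuming jq is installed and the on-disk field names match the struct dump:

	jq -r '.KubernetesConfig.ClusterName' \
	  /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json
	# expected: kindnet-687131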
	I1025 09:15:16.242933  279928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:15:16.242960  279928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:15:16.242982  279928 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:15:16.243012  279928 start.go:360] acquireMachinesLock for kindnet-687131: {Name:mk9e87ffb8b828e3d740e3d2456d3f613e75798f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:15:16.243126  279928 start.go:364] duration metric: took 91.55µs to acquireMachinesLock for "kindnet-687131"
	I1025 09:15:16.243170  279928 start.go:93] Provisioning new machine with config: &{Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:15:16.243276  279928 start.go:125] createHost starting for "" (driver="docker")
	W1025 09:15:14.166974  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:16.172374  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:15.890048  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:17.890391  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
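These repeated pod_ready warnings are just a readiness poll; outside the test harness the equivalent wait is a single kubectl command (pod name taken from the lines above):

	kubectl -n kube-system wait --for=condition=Ready \
	  pod/coredns-66bc5c9577-72zpn --timeout=120s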
	I1025 09:15:15.786223  279556 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:15:15.786457  279556 start.go:159] libmachine.API.Create for "auto-687131" (driver="docker")
	I1025 09:15:15.786489  279556 client.go:168] LocalClient.Create starting
	I1025 09:15:15.786579  279556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:15:15.786623  279556 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:15.786675  279556 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:15.786756  279556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:15:15.786785  279556 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:15.786803  279556 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:15.787187  279556 cli_runner.go:164] Run: docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:15:15.806182  279556 cli_runner.go:211] docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:15:15.806242  279556 network_create.go:284] running [docker network inspect auto-687131] to gather additional debugging logs...
	I1025 09:15:15.806261  279556 cli_runner.go:164] Run: docker network inspect auto-687131
	W1025 09:15:15.827929  279556 cli_runner.go:211] docker network inspect auto-687131 returned with exit code 1
	I1025 09:15:15.827975  279556 network_create.go:287] error running [docker network inspect auto-687131]: docker network inspect auto-687131: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-687131 not found
	I1025 09:15:15.827997  279556 network_create.go:289] output of [docker network inspect auto-687131]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-687131 not found
	
	** /stderr **
	I1025 09:15:15.828184  279556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:15.847440  279556 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:15:15.848061  279556 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:15:15.848790  279556 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:15:15.849253  279556 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:15:15.850068  279556 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e752b0}
	I1025 09:15:15.850116  279556 network_create.go:124] attempt to create docker network auto-687131 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 09:15:15.850193  279556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-687131 auto-687131
	I1025 09:15:15.916274  279556 network_create.go:108] docker network auto-687131 192.168.85.0/24 created
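The scan above walks the private 192.168.x.0/24 ranges (49, 58, 67, 76, ...) until one is unclaimed; the subnets it had to skip can be listed straight from docker:

	# show each bridge network's name and subnet
	docker network ls --filter driver=bridge -q \
	  | xargs docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'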
	I1025 09:15:15.916314  279556 kic.go:121] calculated static IP "192.168.85.2" for the "auto-687131" container
	I1025 09:15:15.916418  279556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:15:15.937311  279556 cli_runner.go:164] Run: docker volume create auto-687131 --label name.minikube.sigs.k8s.io=auto-687131 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:15:15.958005  279556 oci.go:103] Successfully created a docker volume auto-687131
	I1025 09:15:15.958109  279556 cli_runner.go:164] Run: docker run --rm --name auto-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-687131 --entrypoint /usr/bin/test -v auto-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:15:16.396685  279556 oci.go:107] Successfully prepared a docker volume auto-687131
	I1025 09:15:16.396740  279556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:16.396765  279556 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:15:16.396833  279556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:15:19.141617  279556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (2.744742156s)
	I1025 09:15:19.141672  279556 kic.go:203] duration metric: took 2.74490357s to extract preloaded images to volume ...
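The preload is unpacked into the named volume before the node container exists, so the result can be spot-checked with any throwaway image (busybox here is an assumption; any image with ls works):

	docker run --rm -v auto-687131:/var busybox ls /var/lib
	# the cri-o storage tree (conventionally /var/lib/containers) should now be populated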
	W1025 09:15:19.141768  279556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:15:19.141825  279556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:15:19.141868  279556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:15:19.210146  279556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-687131 --name auto-687131 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-687131 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-687131 --network auto-687131 --ip 192.168.85.2 --volume auto-687131:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:15:19.547183  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Running}}
	I1025 09:15:19.568747  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.588991  279556 cli_runner.go:164] Run: docker exec auto-687131 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:15:19.639905  279556 oci.go:144] the created container "auto-687131" has a running status.
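The --publish=127.0.0.1::22-style flags in the run command above bind ephemeral host ports; with the container running they can be listed in one shot:

	docker port auto-687131
	# e.g. 22/tcp -> 127.0.0.1:33105, the port the SSH provisioning below dials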
	I1025 09:15:19.639945  279556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa...
	I1025 09:15:19.760291  279556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:15:19.795261  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.821632  279556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:15:19.821699  279556 kic_runner.go:114] Args: [docker exec --privileged auto-687131 chown docker:docker /home/docker/.ssh/authorized_keys]
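With the key installed, the node is reachable over plain ssh using the generated identity and the mapped port; a sketch (host-key checking relaxed here as an assumption, since the container's host key is freshly generated):

	ssh -i /home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa \
	  -o StrictHostKeyChecking=no -p 33105 docker@127.0.0.1 hostname
	# => auto-687131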
	I1025 09:15:19.870801  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:19.898909  279556 machine.go:93] provisionDockerMachine start ...
	I1025 09:15:19.899009  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:19.922667  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:19.923027  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:19.923059  279556 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:15:20.067753  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-687131
	
	I1025 09:15:20.067781  279556 ubuntu.go:182] provisioning hostname "auto-687131"
	I1025 09:15:20.067841  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:20.086111  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:20.086338  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:20.086354  279556 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-687131 && echo "auto-687131" | sudo tee /etc/hostname
	I1025 09:15:20.271814  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-687131
	
	I1025 09:15:20.271897  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:20.292274  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:20.292587  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:20.292623  279556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-687131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-687131/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-687131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:15:20.442537  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
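The script above guarantees exactly one 127.0.1.1 entry for the node's hostname; the effect can be confirmed from the host once the profile exists:

	out/minikube-linux-amd64 -p auto-687131 ssh -- grep auto-687131 /etc/hosts
	# => 127.0.1.1 auto-687131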
	I1025 09:15:20.442571  279556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:15:20.442604  279556 ubuntu.go:190] setting up certificates
	I1025 09:15:20.442619  279556 provision.go:84] configureAuth start
	I1025 09:15:20.442691  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:20.460617  279556 provision.go:143] copyHostCerts
	I1025 09:15:20.460717  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:15:20.460730  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:15:20.510975  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:15:20.511209  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:15:20.511225  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:15:20.511278  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:15:20.511407  279556 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:15:20.511419  279556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:15:20.511456  279556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:15:20.511555  279556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.auto-687131 san=[127.0.0.1 192.168.85.2 auto-687131 localhost minikube]
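Every name in the san=[...] list above lands in the server certificate's subjectAltName extension; with OpenSSL 1.1.1 or newer it can be printed directly:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem
	# expected to list auto-687131, localhost, minikube, 127.0.0.1 and 192.168.85.2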
	I1025 09:15:16.245622  279928 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:15:16.245926  279928 start.go:159] libmachine.API.Create for "kindnet-687131" (driver="docker")
	I1025 09:15:16.245971  279928 client.go:168] LocalClient.Create starting
	I1025 09:15:16.246054  279928 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem
	I1025 09:15:16.246095  279928 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:16.246115  279928 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:16.246201  279928 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem
	I1025 09:15:16.246246  279928 main.go:141] libmachine: Decoding PEM data...
	I1025 09:15:16.246267  279928 main.go:141] libmachine: Parsing certificate...
	I1025 09:15:16.246894  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:15:16.270502  279928 cli_runner.go:211] docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:15:16.270577  279928 network_create.go:284] running [docker network inspect kindnet-687131] to gather additional debugging logs...
	I1025 09:15:16.270592  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131
	W1025 09:15:16.290826  279928 cli_runner.go:211] docker network inspect kindnet-687131 returned with exit code 1
	I1025 09:15:16.290865  279928 network_create.go:287] error running [docker network inspect kindnet-687131]: docker network inspect kindnet-687131: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-687131 not found
	I1025 09:15:16.290881  279928 network_create.go:289] output of [docker network inspect kindnet-687131]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-687131 not found
	
	** /stderr **
	I1025 09:15:16.290987  279928 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:16.314287  279928 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
	I1025 09:15:16.315250  279928 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2070549be1c5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:59:32:48:d5:bf} reservation:<nil>}
	I1025 09:15:16.316258  279928 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0f2dcb5e1e3a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:0a:35:fa:46:d2} reservation:<nil>}
	I1025 09:15:16.316988  279928 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0da8ae66392 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:12:a1:a5:30:89} reservation:<nil>}
	I1025 09:15:16.317865  279928 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-427f290f6b13 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:07:d0:a1:54:23} reservation:<nil>}
	I1025 09:15:16.318520  279928 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-5d58a21465e1 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4e:78:a8:09:a3:02} reservation:<nil>}
	I1025 09:15:16.319390  279928 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe0500}
	I1025 09:15:16.319416  279928 network_create.go:124] attempt to create docker network kindnet-687131 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:15:16.319460  279928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-687131 kindnet-687131
	I1025 09:15:16.397907  279928 network_create.go:108] docker network kindnet-687131 192.168.103.0/24 created
	I1025 09:15:16.397939  279928 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-687131" container
	I1025 09:15:16.397993  279928 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:15:16.417914  279928 cli_runner.go:164] Run: docker volume create kindnet-687131 --label name.minikube.sigs.k8s.io=kindnet-687131 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:15:16.437974  279928 oci.go:103] Successfully created a docker volume kindnet-687131
	I1025 09:15:16.438054  279928 cli_runner.go:164] Run: docker run --rm --name kindnet-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --entrypoint /usr/bin/test -v kindnet-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:15:17.461263  279928 cli_runner.go:217] Completed: docker run --rm --name kindnet-687131-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --entrypoint /usr/bin/test -v kindnet-687131:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.023162971s)
	I1025 09:15:17.461305  279928 oci.go:107] Successfully prepared a docker volume kindnet-687131
	I1025 09:15:17.461333  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:17.461353  279928 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:15:17.461430  279928 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 09:15:18.301233  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	W1025 09:15:20.666718  267761 pod_ready.go:104] pod "coredns-66bc5c9577-dx4j4" is not "Ready", error: <nil>
	I1025 09:15:22.166607  267761 pod_ready.go:94] pod "coredns-66bc5c9577-dx4j4" is "Ready"
	I1025 09:15:22.166687  267761 pod_ready.go:86] duration metric: took 33.505954367s for pod "coredns-66bc5c9577-dx4j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.170010  267761 pod_ready.go:83] waiting for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.174911  267761 pod_ready.go:94] pod "etcd-embed-certs-106968" is "Ready"
	I1025 09:15:22.174944  267761 pod_ready.go:86] duration metric: took 4.899097ms for pod "etcd-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.177358  267761 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.181786  267761 pod_ready.go:94] pod "kube-apiserver-embed-certs-106968" is "Ready"
	I1025 09:15:22.181822  267761 pod_ready.go:86] duration metric: took 4.436379ms for pod "kube-apiserver-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.183829  267761 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.364905  267761 pod_ready.go:94] pod "kube-controller-manager-embed-certs-106968" is "Ready"
	I1025 09:15:22.364933  267761 pod_ready.go:86] duration metric: took 181.084937ms for pod "kube-controller-manager-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.565796  267761 pod_ready.go:83] waiting for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:22.964268  267761 pod_ready.go:94] pod "kube-proxy-sm8hw" is "Ready"
	I1025 09:15:22.964293  267761 pod_ready.go:86] duration metric: took 398.467936ms for pod "kube-proxy-sm8hw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.164880  267761 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.565174  267761 pod_ready.go:94] pod "kube-scheduler-embed-certs-106968" is "Ready"
	I1025 09:15:23.565206  267761 pod_ready.go:86] duration metric: took 400.294371ms for pod "kube-scheduler-embed-certs-106968" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:23.565222  267761 pod_ready.go:40] duration metric: took 34.9096785s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:15:23.621826  267761 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:15:23.624241  267761 out.go:179] * Done! kubectl is now configured to use "embed-certs-106968" cluster and "default" namespace by default
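"Done!" here means the kubeconfig's current-context was switched to the new cluster; other running profiles stay addressable by name:

	kubectl config current-context
	# => embed-certs-106968
	kubectl --context default-k8s-diff-port-891466 get nodes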
	I1025 09:15:21.341448  279556 provision.go:177] copyRemoteCerts
	I1025 09:15:21.341532  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:15:21.341608  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:21.362919  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:21.473321  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:15:21.654106  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 09:15:21.717581  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:15:21.741804  279556 provision.go:87] duration metric: took 1.299167498s to configureAuth
	I1025 09:15:21.741856  279556 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:15:21.742057  279556 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:21.742325  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:21.768335  279556 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:21.769187  279556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1025 09:15:21.769223  279556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:15:22.255810  279556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:15:22.255850  279556 machine.go:96] duration metric: took 2.356919433s to provisionDockerMachine
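The main.go:141 "About to run SSH command" steps above are libmachine driving the node over SSH. A rough standalone equivalent using golang.org/x/crypto/ssh, assuming the user, port and key path shown for this run:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/auto-687131/id_rsa"))
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33105", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        // The same command the log shows being run on the node.
        out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }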
	I1025 09:15:22.255864  279556 client.go:171] duration metric: took 6.469363636s to LocalClient.Create
	I1025 09:15:22.255894  279556 start.go:167] duration metric: took 6.469435334s to libmachine.API.Create "auto-687131"
	I1025 09:15:22.255910  279556 start.go:293] postStartSetup for "auto-687131" (driver="docker")
	I1025 09:15:22.255923  279556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:15:22.255996  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:15:22.256050  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.277614  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.387947  279556 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:15:22.395824  279556 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:15:22.395865  279556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:15:22.395879  279556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:15:22.395950  279556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:15:22.396136  279556 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:15:22.396541  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:15:22.407550  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:22.434048  279556 start.go:296] duration metric: took 178.121274ms for postStartSetup
	I1025 09:15:22.434977  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:22.457420  279556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/config.json ...
	I1025 09:15:22.457771  279556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:15:22.457824  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.480826  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.584880  279556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:15:22.590391  279556 start.go:128] duration metric: took 6.806327034s to createHost
	I1025 09:15:22.590431  279556 start.go:83] releasing machines lock for "auto-687131", held for 6.80645362s
	I1025 09:15:22.590493  279556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-687131
	I1025 09:15:22.610539  279556 ssh_runner.go:195] Run: cat /version.json
	I1025 09:15:22.610583  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.610603  279556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:15:22.610695  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:22.630329  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.630621  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:22.798632  279556 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:22.806370  279556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:15:22.847984  279556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:15:22.853905  279556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:15:22.853979  279556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:15:22.881992  279556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
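The find/mv step above sidelines any bridge or podman CNI configs so the CNI minikube chooses wins. A rough Go equivalent of just that rename-to-.mk_disabled behavior (run as root on the node):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var disabled []string
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pat)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
                disabled = append(disabled, m)
            }
        }
        fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
    }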
	I1025 09:15:22.882017  279556 start.go:495] detecting cgroup driver to use...
	I1025 09:15:22.882050  279556 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:15:22.882096  279556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:15:22.902000  279556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:15:22.917189  279556 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:15:22.917246  279556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:15:22.935738  279556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:15:22.960242  279556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:15:23.066373  279556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:15:23.203040  279556 docker.go:234] disabling docker service ...
	I1025 09:15:23.203110  279556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:15:23.225691  279556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:15:23.242722  279556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:15:23.338881  279556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:15:23.436201  279556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:15:23.449397  279556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:15:23.465144  279556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:15:23.465208  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.476785  279556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:15:23.476857  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.486376  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.496079  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.507141  279556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:15:23.516073  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.526594  279556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.544236  279556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:23.554362  279556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:15:23.563498  279556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:15:23.572509  279556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:23.669764  279556 ssh_runner.go:195] Run: sudo systemctl restart crio
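The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default_sysctls) before restarting CRI-O. A sketch of the two central substitutions done in Go instead of sed, with the path and values taken from the log:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        s := string(data)
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, `cgroup_manager = "systemd"`)
        if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
            panic(err)
        }
    }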
	I1025 09:15:23.790270  279556 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:15:23.790374  279556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:15:23.794532  279556 start.go:563] Will wait 60s for crictl version
	I1025 09:15:23.794589  279556 ssh_runner.go:195] Run: which crictl
	I1025 09:15:23.798393  279556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:15:23.823069  279556 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:15:23.823148  279556 ssh_runner.go:195] Run: crio --version
	I1025 09:15:23.852060  279556 ssh_runner.go:195] Run: crio --version
	I1025 09:15:23.884239  279556 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 09:15:19.896862  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:22.390120  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:23.885891  279556 cli_runner.go:164] Run: docker network inspect auto-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:23.906293  279556 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:15:23.911133  279556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
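Note the hosts update above copies a temp file over /etc/hosts rather than renaming one onto it, presumably because /etc/hosts is a bind mount inside the container and rename(2) onto it would fail. A Go sketch with the same in-place-write semantics (entry taken from this run; it also drops blank lines, which a sketch can tolerate):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.85.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Keep every line except old host.minikube.internal entries,
        // then append the fresh one (grep -v ...; echo ... in the log).
        var keep []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
                keep = append(keep, line)
            }
        }
        keep = append(keep, entry)
        // os.WriteFile truncates and rewrites in place, no rename involved.
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("updated /etc/hosts")
    }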
	I1025 09:15:23.925504  279556 kubeadm.go:883] updating cluster {Name:auto-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:15:23.925712  279556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:23.925784  279556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:23.966169  279556 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:23.966190  279556 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:15:23.966243  279556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:23.994585  279556 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:23.994604  279556 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:15:23.994611  279556 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:15:23.994737  279556 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-687131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:15:23.994831  279556 ssh_runner.go:195] Run: crio config
	I1025 09:15:24.046767  279556 cni.go:84] Creating CNI manager for ""
	I1025 09:15:24.046790  279556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:24.046811  279556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:15:24.046837  279556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-687131 NodeName:auto-687131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:15:24.046988  279556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-687131"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:15:24.047063  279556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:15:24.055111  279556 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:15:24.055172  279556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:15:24.063035  279556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 09:15:24.076837  279556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:15:24.094395  279556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
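The rendered kubeadm config above is shipped to /var/tmp/minikube/kubeadm.yaml.new; minikube generates it from the options struct dumped at kubeadm.go:190. A toy text/template rendering of a slice of it; the template text and struct fields here are illustrative, not minikube's real template:

    package main

    import (
        "os"
        "text/template"
    )

    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
        K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the auto-687131 run logged above.
        if err := t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.85.2",
            APIServerPort:    8443,
            NodeName:         "auto-687131",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            K8sVersion:       "v1.34.1",
        }); err != nil {
            panic(err)
        }
    }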
	I1025 09:15:24.107726  279556 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:15:24.112067  279556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:24.122709  279556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:24.208028  279556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:24.236216  279556 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131 for IP: 192.168.85.2
	I1025 09:15:24.236238  279556 certs.go:195] generating shared ca certs ...
	I1025 09:15:24.236256  279556 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.236434  279556 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:15:24.236488  279556 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:15:24.236501  279556 certs.go:257] generating profile certs ...
	I1025 09:15:24.236564  279556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key
	I1025 09:15:24.236581  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt with IP's: []
	I1025 09:15:24.928992  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt ...
	I1025 09:15:24.929020  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.crt: {Name:mk779bd9fdf8eaa5918f81c459f798815b970211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.929218  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key ...
	I1025 09:15:24.929242  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/client.key: {Name:mk46972b19f1fd85299d3aff68dfc355ea581ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:24.929386  279556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded
	I1025 09:15:24.929408  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 09:15:25.370687  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded ...
	I1025 09:15:25.370717  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded: {Name:mk758bb25e73fe6bee588c76326f09382b8c326f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.370874  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded ...
	I1025 09:15:25.370888  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded: {Name:mk7ceb126fbb04a31aaba790cb04f339aa54e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.370958  279556 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt.25516ded -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt
	I1025 09:15:25.371030  279556 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key.25516ded -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key
	I1025 09:15:25.371087  279556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key
	I1025 09:15:25.371102  279556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt with IP's: []
	I1025 09:15:25.463911  279556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt ...
	I1025 09:15:25.463935  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt: {Name:mk4787cbad8c90eaac31b2526653c5fcc02d8be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:25.464075  279556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key ...
	I1025 09:15:25.464086  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key: {Name:mk2a6539101452dd3e491062dcc240c2c53ba421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
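The crypto.go lines above mint CA-signed profile certs, including the apiserver cert with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A self-contained crypto/x509 sketch of the same idea; it generates a throwaway CA so it runs standalone, whereas minikube reuses the CA key pair under .minikube:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (minikube would load ca.crt/ca.key instead).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }
        // Leaf cert with the IP SANs from the log.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }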
	I1025 09:15:25.464280  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:15:25.464315  279556 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:15:25.464324  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:15:25.464345  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:15:25.464370  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:15:25.464393  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:15:25.464431  279556 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:25.464974  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:15:25.483378  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:15:25.501823  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:15:25.520038  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:15:25.539492  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 09:15:25.558370  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:15:22.440164  279928 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-687131:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.978689163s)
	I1025 09:15:22.440203  279928 kic.go:203] duration metric: took 4.978845546s to extract preloaded images to volume ...
	W1025 09:15:22.440286  279928 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:15:22.440329  279928 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:15:22.440367  279928 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:15:22.506269  279928 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-687131 --name kindnet-687131 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-687131 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-687131 --network kindnet-687131 --ip 192.168.103.2 --volume kindnet-687131:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
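Each cli_runner.go:164 line above is a shell-out to the docker CLI with captured output. Reduced to a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same shape as: docker container inspect kindnet-687131 --format={{.State.Running}}
        cmd := exec.Command("docker", "container", "inspect",
            "kindnet-687131", "--format", "{{.State.Running}}")
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        fmt.Printf("Run: %s -> %s", cmd, out)
    }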
	I1025 09:15:22.799105  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Running}}
	I1025 09:15:22.820206  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:22.842929  279928 cli_runner.go:164] Run: docker exec kindnet-687131 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:15:22.892617  279928 oci.go:144] the created container "kindnet-687131" has a running status.
	I1025 09:15:22.892659  279928 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa...
	I1025 09:15:23.014325  279928 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:15:23.049009  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:23.070457  279928 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:15:23.070500  279928 kic_runner.go:114] Args: [docker exec --privileged kindnet-687131 chown docker:docker /home/docker/.ssh/authorized_keys]
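kic.go:225 above creates the SSH keypair that is then pushed into the container as /home/docker/.ssh/authorized_keys. A sketch of that generation step (output paths are illustrative):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // id_rsa: PEM-encoded PKCS#1 private key, owner-readable only.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        // id_rsa.pub: one authorized_keys-format line for the container.
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }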
	I1025 09:15:23.134243  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:23.158093  279928 machine.go:93] provisionDockerMachine start ...
	I1025 09:15:23.158226  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.181058  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.181403  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.181428  279928 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:15:23.331938  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-687131
	
	I1025 09:15:23.331970  279928 ubuntu.go:182] provisioning hostname "kindnet-687131"
	I1025 09:15:23.332035  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.353853  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.354132  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.354153  279928 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-687131 && echo "kindnet-687131" | sudo tee /etc/hostname
	I1025 09:15:23.515310  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-687131
	
	I1025 09:15:23.515394  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.537215  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:23.537527  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:23.537560  279928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-687131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-687131/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-687131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:15:23.688101  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:15:23.688132  279928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5966/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5966/.minikube}
	I1025 09:15:23.688166  279928 ubuntu.go:190] setting up certificates
	I1025 09:15:23.688179  279928 provision.go:84] configureAuth start
	I1025 09:15:23.688244  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:23.709237  279928 provision.go:143] copyHostCerts
	I1025 09:15:23.709298  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem, removing ...
	I1025 09:15:23.709318  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem
	I1025 09:15:23.709404  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/ca.pem (1078 bytes)
	I1025 09:15:23.709548  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem, removing ...
	I1025 09:15:23.709565  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem
	I1025 09:15:23.709612  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/cert.pem (1123 bytes)
	I1025 09:15:23.709727  279928 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem, removing ...
	I1025 09:15:23.709739  279928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem
	I1025 09:15:23.709774  279928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5966/.minikube/key.pem (1675 bytes)
	I1025 09:15:23.709864  279928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem org=jenkins.kindnet-687131 san=[127.0.0.1 192.168.103.2 kindnet-687131 localhost minikube]
	I1025 09:15:23.878508  279928 provision.go:177] copyRemoteCerts
	I1025 09:15:23.878559  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:15:23.878599  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:23.900441  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.009301  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:15:24.031121  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:15:24.051157  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1025 09:15:24.069764  279928 provision.go:87] duration metric: took 381.568636ms to configureAuth
	I1025 09:15:24.069798  279928 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:15:24.069969  279928 config.go:182] Loaded profile config "kindnet-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:24.070073  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.091045  279928 main.go:141] libmachine: Using SSH client type: native
	I1025 09:15:24.091297  279928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1025 09:15:24.091319  279928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:15:24.366841  279928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:15:24.366868  279928 machine.go:96] duration metric: took 1.208744926s to provisionDockerMachine
	I1025 09:15:24.366878  279928 client.go:171] duration metric: took 8.120898239s to LocalClient.Create
	I1025 09:15:24.366903  279928 start.go:167] duration metric: took 8.120973439s to libmachine.API.Create "kindnet-687131"
	I1025 09:15:24.366916  279928 start.go:293] postStartSetup for "kindnet-687131" (driver="docker")
	I1025 09:15:24.366927  279928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:15:24.366989  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:15:24.367022  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.386435  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.490100  279928 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:15:24.493952  279928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:15:24.493982  279928 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:15:24.493997  279928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/addons for local assets ...
	I1025 09:15:24.494064  279928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5966/.minikube/files for local assets ...
	I1025 09:15:24.494174  279928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem -> 94732.pem in /etc/ssl/certs
	I1025 09:15:24.494310  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:15:24.502630  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:24.524399  279928 start.go:296] duration metric: took 157.46682ms for postStartSetup
	I1025 09:15:24.524816  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:24.543897  279928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/config.json ...
	I1025 09:15:24.544201  279928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:15:24.544248  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.562392  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.660938  279928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:15:24.666060  279928 start.go:128] duration metric: took 8.422763522s to createHost
	I1025 09:15:24.666089  279928 start.go:83] releasing machines lock for "kindnet-687131", held for 8.422948298s
	I1025 09:15:24.666161  279928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-687131
	I1025 09:15:24.686558  279928 ssh_runner.go:195] Run: cat /version.json
	I1025 09:15:24.686619  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.686618  279928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:15:24.686694  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:24.707640  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.707737  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:24.805204  279928 ssh_runner.go:195] Run: systemctl --version
	I1025 09:15:24.861031  279928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:15:24.899252  279928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:15:24.904135  279928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:15:24.904213  279928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:15:24.931204  279928 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:15:24.931225  279928 start.go:495] detecting cgroup driver to use...
	I1025 09:15:24.931256  279928 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:15:24.931299  279928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:15:24.948666  279928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:15:24.962055  279928 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:15:24.962115  279928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:15:24.980169  279928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:15:24.998963  279928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:15:25.096394  279928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:15:25.188449  279928 docker.go:234] disabling docker service ...
	I1025 09:15:25.188539  279928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:15:25.207995  279928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:15:25.222036  279928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:15:25.319414  279928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:15:25.412233  279928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:15:25.425899  279928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:15:25.441635  279928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:15:25.441709  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.453116  279928 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:15:25.453188  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.462464  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.471732  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.480919  279928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:15:25.490188  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.499310  279928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.514357  279928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:15:25.523846  279928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:15:25.532211  279928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:15:25.540303  279928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:25.626699  279928 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:15:25.739482  279928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:15:25.739551  279928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:15:25.743863  279928 start.go:563] Will wait 60s for crictl version
	I1025 09:15:25.743922  279928 ssh_runner.go:195] Run: which crictl
	I1025 09:15:25.747790  279928 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:15:25.774761  279928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:15:25.774855  279928 ssh_runner.go:195] Run: crio --version
	I1025 09:15:25.809624  279928 ssh_runner.go:195] Run: crio --version
	I1025 09:15:25.841924  279928 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:15:25.843191  279928 cli_runner.go:164] Run: docker network inspect kindnet-687131 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:15:25.860519  279928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:15:25.864742  279928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:15:25.875509  279928 kubeadm.go:883] updating cluster {Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:15:25.875665  279928 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:25.875729  279928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:25.913484  279928 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:25.913504  279928 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:15:25.913547  279928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:15:25.943471  279928 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:15:25.943492  279928 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:15:25.943500  279928 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:15:25.943574  279928 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-687131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1025 09:15:25.943633  279928 ssh_runner.go:195] Run: crio config
	I1025 09:15:25.993112  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:25.993145  279928 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:15:25.993184  279928 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-687131 NodeName:kindnet-687131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:15:25.993331  279928 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-687131"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
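	The dump above is the complete kubeadm.yaml minikube renders for this node: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one multi-document file. If a config like this needs to be sanity-checked by hand before init, a sketch (assuming a kubeadm release recent enough to ship the `config validate` subcommand, and the binary/config paths from this log):

	    # Sketch: validate the rendered kubeadm config before running `kubeadm init`
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml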
	I1025 09:15:25.993383  279928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:15:26.002245  279928 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:15:26.002313  279928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:15:26.010918  279928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1025 09:15:25.584752  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:15:25.603272  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/auto-687131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:15:25.622186  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:15:25.643606  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:15:25.661670  279556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:15:25.680760  279556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:15:25.695381  279556 ssh_runner.go:195] Run: openssl version
	I1025 09:15:25.701872  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:15:25.711383  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.715855  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.715916  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:15:25.753328  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:15:25.762817  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:15:25.773811  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.778344  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.778413  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:25.821755  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:15:25.831598  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:15:25.840888  279556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.845139  279556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.845193  279556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:15:25.882755  279556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
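	The openssl/ln pairs above implement OpenSSL's hashed certificate directory layout: each CA placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test PEMs), which is exactly what the `openssl x509 -hash -noout` calls compute. A minimal sketch of one such step:

	    # Compute the subject hash and create the hash-named symlink, as the log does
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"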
	I1025 09:15:25.894652  279556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:15:25.898800  279556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:15:25.898865  279556 kubeadm.go:400] StartCluster: {Name:auto-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:25.898959  279556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:15:25.899034  279556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:15:25.930732  279556 cri.go:89] found id: ""
	I1025 09:15:25.930809  279556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:15:25.940722  279556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:15:25.949522  279556 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:15:25.949590  279556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:15:25.958156  279556 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:15:25.958183  279556 kubeadm.go:157] found existing configuration files:
	
	I1025 09:15:25.958235  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:15:25.967172  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:15:25.967254  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:15:25.976067  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:15:25.984242  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:15:25.984302  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:15:25.993016  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:15:26.002372  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:15:26.002430  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:15:26.010747  279556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:15:26.018587  279556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:15:26.018650  279556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:15:26.026625  279556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:15:26.066622  279556 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:15:26.066753  279556 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:15:26.087508  279556 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:15:26.087610  279556 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:15:26.087697  279556 kubeadm.go:318] OS: Linux
	I1025 09:15:26.087754  279556 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:15:26.087834  279556 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:15:26.087912  279556 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:15:26.088003  279556 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:15:26.088089  279556 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:15:26.088182  279556 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:15:26.088238  279556 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:15:26.088292  279556 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:15:26.159998  279556 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:15:26.160173  279556 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:15:26.160349  279556 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:15:26.168799  279556 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
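	The `kubeadm init` invocation above suppresses a fixed list of preflight checks, and minikube additionally ignores SystemVerification for the docker driver, as logged earlier. If one of those checks needs to be replayed in isolation after a failure, a sketch using kubeadm's phase subcommand (same PATH and config as the log's command):

	    # Sketch: re-run only the preflight phase against the rendered config
	    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	        kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml'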
	I1025 09:15:26.024244  279928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:15:26.039712  279928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1025 09:15:26.054486  279928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:15:26.058574  279928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
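	The one-liner above updates /etc/hosts idempotently: it strips any existing control-plane.minikube.internal entry, appends the current node IP, and copies the result back under sudo. Expanded for readability (same effect; the temp path is illustrative):

	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	    printf '192.168.103.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts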
	I1025 09:15:26.069835  279928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:26.162803  279928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:26.190619  279928 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131 for IP: 192.168.103.2
	I1025 09:15:26.190663  279928 certs.go:195] generating shared ca certs ...
	I1025 09:15:26.190687  279928 certs.go:227] acquiring lock for ca certs: {Name:mkfe6a476f2b80503d0332bb98cd9ba9e323116b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.190849  279928 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key
	I1025 09:15:26.190912  279928 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key
	I1025 09:15:26.190926  279928 certs.go:257] generating profile certs ...
	I1025 09:15:26.190998  279928 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key
	I1025 09:15:26.191017  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt with IP's: []
	I1025 09:15:26.219280  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt ...
	I1025 09:15:26.219307  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.crt: {Name:mk42146df35f32426a420017cd45ab46d2df2c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.219512  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key ...
	I1025 09:15:26.219526  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/client.key: {Name:mka29965ab108f0e622f83908536f26ef739d604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.219659  279928 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2
	I1025 09:15:26.219684  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:15:26.329319  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 ...
	I1025 09:15:26.329363  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2: {Name:mk046cb06650a4e0f6d7e42c28f3d48d22d4b0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.329540  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2 ...
	I1025 09:15:26.329554  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2: {Name:mk530378837f592628c77d98032c76a4244f4436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.329625  279928 certs.go:382] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt.b70821b2 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt
	I1025 09:15:26.329742  279928 certs.go:386] copying /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key.b70821b2 -> /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key
	I1025 09:15:26.329805  279928 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key
	I1025 09:15:26.329820  279928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt with IP's: []
	I1025 09:15:26.735246  279928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt ...
	I1025 09:15:26.735276  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt: {Name:mk782fc69db18d88753465cefca07ee61999cf6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.735488  279928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key ...
	I1025 09:15:26.735505  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key: {Name:mkf8bb93af2e3d11ccf0ab894717b994adb063f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:26.735728  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem (1338 bytes)
	W1025 09:15:26.735765  279928 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473_empty.pem, impossibly tiny 0 bytes
	I1025 09:15:26.735772  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:15:26.735795  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:15:26.735827  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:15:26.735849  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/certs/key.pem (1675 bytes)
	I1025 09:15:26.735888  279928 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem (1708 bytes)
	I1025 09:15:26.736421  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:15:26.755820  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:15:26.774147  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:15:26.792920  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:15:26.811477  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:15:26.830199  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:15:26.848867  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:15:26.868102  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/kindnet-687131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:15:26.887910  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/certs/9473.pem --> /usr/share/ca-certificates/9473.pem (1338 bytes)
	I1025 09:15:26.909492  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/ssl/certs/94732.pem --> /usr/share/ca-certificates/94732.pem (1708 bytes)
	I1025 09:15:26.927490  279928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:15:26.944979  279928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:15:26.958195  279928 ssh_runner.go:195] Run: openssl version
	I1025 09:15:26.965063  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:15:26.974977  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:26.979031  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:26.979097  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:15:27.018091  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:15:27.028873  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9473.pem && ln -fs /usr/share/ca-certificates/9473.pem /etc/ssl/certs/9473.pem"
	I1025 09:15:27.038925  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.043775  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:35 /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.043852  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9473.pem
	I1025 09:15:27.081032  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9473.pem /etc/ssl/certs/51391683.0"
	I1025 09:15:27.090311  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94732.pem && ln -fs /usr/share/ca-certificates/94732.pem /etc/ssl/certs/94732.pem"
	I1025 09:15:27.099410  279928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.103356  279928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:35 /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.103409  279928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94732.pem
	I1025 09:15:27.140571  279928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94732.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:15:27.149701  279928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:15:27.153665  279928 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:15:27.153732  279928 kubeadm.go:400] StartCluster: {Name:kindnet-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:27.153809  279928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:15:27.153884  279928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:15:27.183161  279928 cri.go:89] found id: ""
	I1025 09:15:27.183234  279928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:15:27.191544  279928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:15:27.200214  279928 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:15:27.200290  279928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:15:27.208454  279928 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:15:27.208475  279928 kubeadm.go:157] found existing configuration files:
	
	I1025 09:15:27.208526  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:15:27.217396  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:15:27.217456  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:15:27.225670  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:15:27.236151  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:15:27.236214  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:15:27.245161  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:15:27.254460  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:15:27.254531  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:15:27.264877  279928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:15:27.274289  279928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:15:27.274375  279928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:15:27.284912  279928 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:15:27.328789  279928 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:15:27.328867  279928 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:15:27.351178  279928 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:15:27.351294  279928 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:15:27.351391  279928 kubeadm.go:318] OS: Linux
	I1025 09:15:27.351484  279928 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:15:27.351562  279928 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:15:27.351632  279928 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:15:27.351718  279928 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:15:27.351793  279928 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:15:27.351868  279928 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:15:27.351932  279928 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:15:27.351988  279928 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:15:27.422485  279928 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:15:27.422668  279928 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:15:27.422808  279928 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:15:27.430489  279928 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 09:15:24.889176  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	W1025 09:15:26.889579  268581 pod_ready.go:104] pod "coredns-66bc5c9577-72zpn" is not "Ready", error: <nil>
	I1025 09:15:28.388946  268581 pod_ready.go:94] pod "coredns-66bc5c9577-72zpn" is "Ready"
	I1025 09:15:28.388977  268581 pod_ready.go:86] duration metric: took 37.505736505s for pod "coredns-66bc5c9577-72zpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.392090  268581 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.397093  268581 pod_ready.go:94] pod "etcd-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.397132  268581 pod_ready.go:86] duration metric: took 5.011857ms for pod "etcd-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.399595  268581 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.403894  268581 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.403922  268581 pod_ready.go:86] duration metric: took 4.302014ms for pod "kube-apiserver-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.406153  268581 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.587570  268581 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:28.587597  268581 pod_ready.go:86] duration metric: took 181.422256ms for pod "kube-controller-manager-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:28.787005  268581 pod_ready.go:83] waiting for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.187351  268581 pod_ready.go:94] pod "kube-proxy-rmqbr" is "Ready"
	I1025 09:15:29.187384  268581 pod_ready.go:86] duration metric: took 400.350279ms for pod "kube-proxy-rmqbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.387388  268581 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.787121  268581 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-891466" is "Ready"
	I1025 09:15:29.787150  268581 pod_ready.go:86] duration metric: took 399.732519ms for pod "kube-scheduler-default-k8s-diff-port-891466" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:15:29.787164  268581 pod_ready.go:40] duration metric: took 38.908438746s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
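	The 38.9s of "extra waiting" above is minikube polling each labelled kube-system pod until it reports Ready (or disappears). Roughly the same check can be expressed with kubectl directly; a sketch for the coredns case, which took the longest here:

	    kubectl -n kube-system wait --for=condition=Ready \
	        pod -l k8s-app=kube-dns --timeout=120s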
	I1025 09:15:29.833272  268581 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:15:29.837751  268581 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-891466" cluster and "default" namespace by default
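	Once the "Done!" line appears, the kubeconfig context has already been switched, which can be confirmed by hand:

	    kubectl config current-context    # expect: default-k8s-diff-port-891466
	    kubectl get nodes -o wide         # the node should report Ready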
	I1025 09:15:26.172422  279556 out.go:252]   - Generating certificates and keys ...
	I1025 09:15:26.172535  279556 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:15:26.172634  279556 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:15:26.285628  279556 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:15:26.713013  279556 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:15:27.071494  279556 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:15:27.179216  279556 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:15:27.221118  279556 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:15:27.221288  279556 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-687131 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:15:27.928204  279556 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:15:27.928373  279556 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-687131 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:15:28.068848  279556 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:15:28.204926  279556 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:15:28.440284  279556 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:15:28.440376  279556 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:15:28.579490  279556 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:15:28.909219  279556 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:15:29.245788  279556 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:15:29.318242  279556 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:15:29.914745  279556 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:15:29.915521  279556 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:15:29.920405  279556 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:15:29.923766  279556 out.go:252]   - Booting up control plane ...
	I1025 09:15:29.923896  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:15:29.924007  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:15:29.924130  279556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:15:29.938834  279556 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:15:29.938992  279556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:15:29.947531  279556 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:15:29.947860  279556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:15:29.947903  279556 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:15:30.066710  279556 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:15:30.066882  279556 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:15:27.433789  279928 out.go:252]   - Generating certificates and keys ...
	I1025 09:15:27.433905  279928 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:15:27.434019  279928 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:15:27.635226  279928 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:15:28.010533  279928 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:15:28.223358  279928 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:15:28.339793  279928 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:15:28.504635  279928 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:15:28.504813  279928 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-687131 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:15:28.673200  279928 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:15:28.673381  279928 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-687131 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:15:28.779444  279928 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:15:28.943425  279928 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:15:29.037026  279928 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:15:29.037226  279928 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:15:29.100058  279928 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:15:29.360945  279928 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:15:29.761516  279928 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:15:30.697334  279928 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:15:30.927462  279928 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:15:30.928032  279928 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:15:30.933234  279928 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:15:30.936503  279928 out.go:252]   - Booting up control plane ...
	I1025 09:15:30.936633  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:15:30.936762  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:15:30.936842  279928 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:15:30.949721  279928 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:15:30.949850  279928 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:15:30.956514  279928 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:15:30.956751  279928 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:15:30.956797  279928 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:15:31.067780  279556 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001155245s
	I1025 09:15:31.071443  279556 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:15:31.071574  279556 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 09:15:31.071722  279556 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:15:31.071865  279556 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:15:32.114667  279556 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.043109229s
	I1025 09:15:33.596604  279556 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.525171723s
	I1025 09:15:35.073741  279556 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00221941s
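	The control-plane-check lines above poll each component's own health endpoint at the URLs shown. The same endpoints can be probed manually with curl (-k because the serving certificates are cluster-signed):

	    curl -k https://192.168.85.2:8443/livez     # kube-apiserver
	    curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	    curl -k https://127.0.0.1:10259/livez       # kube-scheduler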
	I1025 09:15:35.087030  279556 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:15:35.099234  279556 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:15:35.109692  279556 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:15:35.109931  279556 kubeadm.go:318] [mark-control-plane] Marking the node auto-687131 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:15:35.119543  279556 kubeadm.go:318] [bootstrap-token] Using token: ds09vj.7po14nmutnpjjt8b
	I1025 09:15:35.121198  279556 out.go:252]   - Configuring RBAC rules ...
	I1025 09:15:35.121342  279556 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:15:35.126177  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:15:35.134866  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:15:35.137736  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:15:35.140350  279556 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:15:35.144165  279556 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:15:35.479861  279556 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:15:31.054706  279928 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:15:31.054857  279928 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:15:32.055728  279928 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001030932s
	I1025 09:15:32.059944  279928 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:15:32.060054  279928 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1025 09:15:32.060171  279928 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:15:32.060273  279928 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:15:33.205829  279928 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.145837798s
	I1025 09:15:33.879959  279928 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.819970792s
	I1025 09:15:35.561861  279928 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501854256s
	I1025 09:15:35.574015  279928 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:15:35.585708  279928 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:15:35.595437  279928 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:15:35.595789  279928 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-687131 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:15:35.605853  279928 kubeadm.go:318] [bootstrap-token] Using token: a4kf7c.mn4eyqkotrnz0x3q
	I1025 09:15:35.607340  279928 out.go:252]   - Configuring RBAC rules ...
	I1025 09:15:35.607488  279928 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:15:35.611019  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:15:35.617043  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:15:35.619701  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:15:35.623283  279928 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:15:35.625946  279928 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:15:35.967831  279928 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:15:35.901156  279556 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:15:36.480801  279556 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:15:36.481768  279556 kubeadm.go:318] 
	I1025 09:15:36.481872  279556 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:15:36.481883  279556 kubeadm.go:318] 
	I1025 09:15:36.481998  279556 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:15:36.482009  279556 kubeadm.go:318] 
	I1025 09:15:36.482046  279556 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:15:36.482134  279556 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:15:36.482232  279556 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:15:36.482255  279556 kubeadm.go:318] 
	I1025 09:15:36.482334  279556 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:15:36.482344  279556 kubeadm.go:318] 
	I1025 09:15:36.482421  279556 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:15:36.482432  279556 kubeadm.go:318] 
	I1025 09:15:36.482511  279556 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:15:36.482606  279556 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:15:36.482743  279556 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:15:36.482756  279556 kubeadm.go:318] 
	I1025 09:15:36.482883  279556 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:15:36.482995  279556 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:15:36.483005  279556 kubeadm.go:318] 
	I1025 09:15:36.483113  279556 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ds09vj.7po14nmutnpjjt8b \
	I1025 09:15:36.483287  279556 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:15:36.483321  279556 kubeadm.go:318] 	--control-plane 
	I1025 09:15:36.483329  279556 kubeadm.go:318] 
	I1025 09:15:36.483475  279556 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:15:36.483490  279556 kubeadm.go:318] 
	I1025 09:15:36.483608  279556 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ds09vj.7po14nmutnpjjt8b \
	I1025 09:15:36.483813  279556 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:15:36.486803  279556 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:15:36.486932  279556 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
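	The --discovery-token-ca-cert-hash printed with the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane to verify a join command; a sketch assuming kubeadm's default RSA CA and the certificatesDir from the config earlier in this log:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'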
	I1025 09:15:36.486981  279556 cni.go:84] Creating CNI manager for ""
	I1025 09:15:36.486999  279556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:15:36.488810  279556 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:15:36.386755  279928 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:15:36.968068  279928 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:15:36.969126  279928 kubeadm.go:318] 
	I1025 09:15:36.969223  279928 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:15:36.969233  279928 kubeadm.go:318] 
	I1025 09:15:36.969328  279928 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:15:36.969337  279928 kubeadm.go:318] 
	I1025 09:15:36.969387  279928 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:15:36.969446  279928 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:15:36.969488  279928 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:15:36.969504  279928 kubeadm.go:318] 
	I1025 09:15:36.969598  279928 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:15:36.969608  279928 kubeadm.go:318] 
	I1025 09:15:36.969716  279928 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:15:36.969725  279928 kubeadm.go:318] 
	I1025 09:15:36.969769  279928 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:15:36.969873  279928 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:15:36.969975  279928 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:15:36.969984  279928 kubeadm.go:318] 
	I1025 09:15:36.970083  279928 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:15:36.970215  279928 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:15:36.970235  279928 kubeadm.go:318] 
	I1025 09:15:36.970345  279928 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token a4kf7c.mn4eyqkotrnz0x3q \
	I1025 09:15:36.970489  279928 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 \
	I1025 09:15:36.970537  279928 kubeadm.go:318] 	--control-plane 
	I1025 09:15:36.970556  279928 kubeadm.go:318] 
	I1025 09:15:36.970702  279928 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:15:36.970713  279928 kubeadm.go:318] 
	I1025 09:15:36.970813  279928 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token a4kf7c.mn4eyqkotrnz0x3q \
	I1025 09:15:36.970967  279928 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:2df46bcf1155af94bc1cd72f6326f93f95c4699dd97ade0c6bf259b16e267fd2 
	I1025 09:15:36.973483  279928 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:15:36.973617  279928 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:15:36.973668  279928 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:15:36.975438  279928 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:15:36.490181  279556 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:15:36.494821  279556 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:15:36.494840  279556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:15:36.509308  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:15:36.754860  279556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:15:36.754998  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:36.755095  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-687131 minikube.k8s.io/updated_at=2025_10_25T09_15_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=auto-687131 minikube.k8s.io/primary=true
	I1025 09:15:36.779129  279556 ops.go:34] apiserver oom_adj: -16
	I1025 09:15:36.860306  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:37.360419  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:37.860734  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:38.360571  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:38.861452  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:39.360443  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:39.860733  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:40.360346  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:36.976779  279928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:15:36.981516  279928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:15:36.981532  279928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:15:36.995153  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:15:37.213847  279928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:15:37.213951  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:37.213989  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-687131 minikube.k8s.io/updated_at=2025_10_25T09_15_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=kindnet-687131 minikube.k8s.io/primary=true
	I1025 09:15:37.226912  279928 ops.go:34] apiserver oom_adj: -16
	I1025 09:15:37.307915  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:37.808755  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:38.308702  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:38.807969  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:39.308213  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:39.808512  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:40.308274  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:40.808766  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:40.861304  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:41.360968  279556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:41.439495  279556 kubeadm.go:1113] duration metric: took 4.684536977s to wait for elevateKubeSystemPrivileges
	I1025 09:15:41.439530  279556 kubeadm.go:402] duration metric: took 15.540670525s to StartCluster
	I1025 09:15:41.439547  279556 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:41.439630  279556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:41.441958  279556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:41.442303  279556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:15:41.442307  279556 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:15:41.442343  279556 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:15:41.442453  279556 addons.go:69] Setting storage-provisioner=true in profile "auto-687131"
	I1025 09:15:41.442471  279556 addons.go:238] Setting addon storage-provisioner=true in "auto-687131"
	I1025 09:15:41.442503  279556 host.go:66] Checking if "auto-687131" exists ...
	I1025 09:15:41.442559  279556 addons.go:69] Setting default-storageclass=true in profile "auto-687131"
	I1025 09:15:41.442615  279556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-687131"
	I1025 09:15:41.442561  279556 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:41.442937  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:41.443061  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:41.445409  279556 out.go:179] * Verifying Kubernetes components...
	I1025 09:15:41.449465  279556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:41.472044  279556 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:15:41.308838  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:41.807958  279928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:15:41.897798  279928 kubeadm.go:1113] duration metric: took 4.683906305s to wait for elevateKubeSystemPrivileges
	I1025 09:15:41.897840  279928 kubeadm.go:402] duration metric: took 14.744111963s to StartCluster
	I1025 09:15:41.897863  279928 settings.go:142] acquiring lock: {Name:mk4756e33019ec52979178f46e632036d5d948eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:41.897939  279928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:41.899948  279928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/kubeconfig: {Name:mka3aa7713222bea415f380719b2854907fc8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:41.900401  279928 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:15:41.900426  279928 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:15:41.900503  279928 addons.go:69] Setting storage-provisioner=true in profile "kindnet-687131"
	I1025 09:15:41.900414  279928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:15:41.900689  279928 config.go:182] Loaded profile config "kindnet-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:41.900526  279928 addons.go:238] Setting addon storage-provisioner=true in "kindnet-687131"
	I1025 09:15:41.900790  279928 host.go:66] Checking if "kindnet-687131" exists ...
	I1025 09:15:41.900529  279928 addons.go:69] Setting default-storageclass=true in profile "kindnet-687131"
	I1025 09:15:41.901281  279928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-687131"
	I1025 09:15:41.901338  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:41.901846  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:41.901910  279928 out.go:179] * Verifying Kubernetes components...
	I1025 09:15:41.903480  279928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:15:41.938767  279928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:15:41.472922  279556 addons.go:238] Setting addon default-storageclass=true in "auto-687131"
	I1025 09:15:41.472967  279556 host.go:66] Checking if "auto-687131" exists ...
	I1025 09:15:41.473438  279556 cli_runner.go:164] Run: docker container inspect auto-687131 --format={{.State.Status}}
	I1025 09:15:41.473536  279556 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:15:41.473554  279556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:15:41.473618  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:41.509409  279556 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:15:41.509438  279556 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:15:41.509504  279556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-687131
	I1025 09:15:41.510310  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:41.536498  279556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/auto-687131/id_rsa Username:docker}
	I1025 09:15:41.558841  279556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:15:41.612575  279556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:41.642060  279556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:15:41.665210  279556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:15:41.780982  279556 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1025 09:15:41.782507  279556 node_ready.go:35] waiting up to 15m0s for node "auto-687131" to be "Ready" ...
	I1025 09:15:42.090057  279556 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:15:41.940245  279928 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:15:41.940263  279928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:15:41.940323  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:41.944949  279928 addons.go:238] Setting addon default-storageclass=true in "kindnet-687131"
	I1025 09:15:41.945009  279928 host.go:66] Checking if "kindnet-687131" exists ...
	I1025 09:15:41.945519  279928 cli_runner.go:164] Run: docker container inspect kindnet-687131 --format={{.State.Status}}
	I1025 09:15:41.977204  279928 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:15:41.977248  279928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:15:41.977340  279928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687131
	I1025 09:15:41.979772  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:42.011955  279928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/kindnet-687131/id_rsa Username:docker}
	I1025 09:15:42.058564  279928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:15:42.102846  279928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:15:42.156783  279928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:15:42.183018  279928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:15:42.303495  279928 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1025 09:15:42.305828  279928 node_ready.go:35] waiting up to 15m0s for node "kindnet-687131" to be "Ready" ...
	I1025 09:15:42.580202  279928 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
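	
	The "host record injected into CoreDNS's ConfigMap" lines are the visible effect of the sed pipeline logged a few lines earlier: it edits the coredns ConfigMap in place rather than templating a new one. Only two directives are inserted; the rest of the Corefile is untouched. A sketch of the inserted fragment for the auto-687131 run (the kindnet run gets 192.168.103.1 instead):
	
	  # added before the existing "errors" directive:
	  log
	  # added before the existing "forward . /etc/resolv.conf" line:
	  hosts {
	     192.168.85.1 host.minikube.internal
	     fallthrough
	  }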
	
	
	==> CRI-O <==
	Oct 25 09:15:01 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:01.387859446Z" level=info msg="Started container" PID=1734 containerID=567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper id=6dbcb23b-3d0e-4663-8550-81c94463f504 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94a0a98e93bc9598898529a6b26b6e3ae0eacefce67d9bdabdce8c3cc8e5719c
	Oct 25 09:15:02 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:02.320832838Z" level=info msg="Removing container: 1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169" id=c92cd343-1a71-478c-8819-aa951d813d50 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:02 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:02.341634698Z" level=info msg="Removed container 1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=c92cd343-1a71-478c-8819-aa951d813d50 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.373938228Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=19c60099-dbd4-4f23-93f6-08ecf17beb16 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.426893359Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fc1dea78-ecf4-4e8f-8562-769316e6a98a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.428256909Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=68eb750b-6fc9-4a46-97c1-8b1a7bbed97c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.428405336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.447600175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.447846602Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/de45823f146340e9fa893d1e143a15212471233de5d12c27331b2182e4a86596/merged/etc/passwd: no such file or directory"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.447886398Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/de45823f146340e9fa893d1e143a15212471233de5d12c27331b2182e4a86596/merged/etc/group: no such file or directory"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.448215773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.644937831Z" level=info msg="Created container a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d: kube-system/storage-provisioner/storage-provisioner" id=68eb750b-6fc9-4a46-97c1-8b1a7bbed97c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.645792744Z" level=info msg="Starting container: a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d" id=17ec7134-8858-444c-a492-1c76e3b31aae name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.648250521Z" level=info msg="Started container" PID=1748 containerID=a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d description=kube-system/storage-provisioner/storage-provisioner id=17ec7134-8858-444c-a492-1c76e3b31aae name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1ddb17fba4bb43ad7641c1336a5361dfcf55ca1f23d4fb74295e1a0e16e87fc
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.224382474Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2e7acfb4-5e7c-4cbb-b7f6-2bad0b8c175f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.225497077Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=639cb57b-86a4-4f25-be63-de026345119c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.226788068Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=952fd224-2270-472e-9dac-6a8244b8b634 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.226928864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.232982527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.233501635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.269329189Z" level=info msg="Created container c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=952fd224-2270-472e-9dac-6a8244b8b634 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.270144661Z" level=info msg="Starting container: c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3" id=7c9f6499-c3e7-43ff-b2e3-40bba95c6fbe name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.273974224Z" level=info msg="Started container" PID=1764 containerID=c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper id=7c9f6499-c3e7-43ff-b2e3-40bba95c6fbe name=/runtime.v1.RuntimeService/StartContainer sandboxID=94a0a98e93bc9598898529a6b26b6e3ae0eacefce67d9bdabdce8c3cc8e5719c
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.391534917Z" level=info msg="Removing container: 567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2" id=ff7792d9-f4fb-4a56-82ec-b31e749307df name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.404825018Z" level=info msg="Removed container 567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=ff7792d9-f4fb-4a56-82ec-b31e749307df name=/runtime.v1.RuntimeService/RemoveContainer
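	
	The create/start/remove cycle for dashboard-metrics-scraper in the CRI-O log above is a container crash loop as seen from the runtime: the kubelet starts attempt N, it exits, and the previous attempt's container is pruned. To inspect such a loop on the node itself (crictl ships in the minikube image; the container ID is the one from this run):
	
	  sudo crictl ps -a --name dashboard-metrics-scraper   # all attempts, including Exited
	  sudo crictl logs c403ec41066f5                       # output of the latest attempt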
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c403ec41066f5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   94a0a98e93bc9       dashboard-metrics-scraper-6ffb444bf9-8247d             kubernetes-dashboard
	a53aff721e253       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   f1ddb17fba4bb       storage-provisioner                                    kube-system
	cbc2c58c4b15c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   ab159ab39d4d7       kubernetes-dashboard-855c9754f9-lrnt4                  kubernetes-dashboard
	ab827ee753758       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   e3ee2159b7ee4       coredns-66bc5c9577-72zpn                               kube-system
	75b84bcae614a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   ea57a139889e5       busybox                                                default
	2198288514e04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   f1ddb17fba4bb       storage-provisioner                                    kube-system
	e1ab809e55dad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   4b7eb5b906aec       kindnet-9xc2z                                          kube-system
	2315c753ecdae       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   4a65801e61542       kube-proxy-rmqbr                                       kube-system
	0b42736720451       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   8d6f42b6be632       etcd-default-k8s-diff-port-891466                      kube-system
	e554bff30a142       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   0e102f3be2fe9       kube-apiserver-default-k8s-diff-port-891466            kube-system
	fd90aba509870       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   7dfa8ac6333ee       kube-scheduler-default-k8s-diff-port-891466            kube-system
	ad9cca7cd898c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   fbfed94972ec2       kube-controller-manager-default-k8s-diff-port-891466   kube-system
	
	
	==> coredns [ab827ee7537580129a5443a427008b45db6bea12d0e1320adb16f5314fd100da] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54801 - 11762 "HINFO IN 6181491788219434077.8397239723996058082. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084392335s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
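	
	10.96.0.1:443 is the ClusterIP of the default "kubernetes" Service, i.e. the in-cluster path to the apiserver. The i/o timeouts above are typical of the short window after a node restart before kube-proxy and the CNI have re-programmed that VIP; CoreDNS keeps retrying, and the later successful syncs elsewhere in this log suggest it recovered. A quick check that the VIP is wired up (assumes a working kubeconfig):
	
	  kubectl get svc kubernetes -o wide     # ClusterIP should be 10.96.0.1
	  kubectl get endpointslices -l kubernetes.io/service-name=kubernetes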
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-891466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-891466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=default-k8s-diff-port-891466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_13_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-891466
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:15:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-891466
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2fa36a04-64f2-4ad6-99cd-8fd412b795ce
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-72zpn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-891466                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-9xc2z                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-891466             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-891466    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-rmqbr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-891466             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8247d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lrnt4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           112s               node-controller  Node default-k8s-diff-port-891466 event: Registered Node default-k8s-diff-port-891466 in Controller
	  Normal  NodeReady                100s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-891466 event: Registered Node default-k8s-diff-port-891466 in Controller
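	
	The node description above is plain kubectl describe output; the doubled event sets (one ~117s old, one ~60s old) are the two kubelet starts this stop/start test performs. To re-derive just the condition table from a live cluster (node name from this run):
	
	  kubectl get node default-k8s-diff-port-891466 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'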
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [0b4273672045197aa9930a7861b7ea9c702bee1c1761abe1fac0ba82696ba0bb] <==
	{"level":"warn","ts":"2025-10-25T09:14:48.579222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.589412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.599878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.615957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.627694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.635487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.643368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.650569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.658089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.665121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.672611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.680613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.688821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.696051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.703448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.710005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.717391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.723889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.730758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.738479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.753838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.761364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.769251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.824558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57944","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:15:20.297530Z","caller":"traceutil/trace.go:172","msg":"trace[133654933] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"120.31102ms","start":"2025-10-25T09:15:20.177200Z","end":"2025-10-25T09:15:20.297511Z","steps":["trace[133654933] 'process raft request'  (duration: 120.175163ms)"],"step_count":1}
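	
	The run of "rejected connection ... EOF" warnings is etcd noting clients that opened a TCP connection and closed it before completing the TLS handshake, which is what raw port-liveness probes look like; it is harmless unless it never stops. A direct health probe from inside the node, a sketch assuming minikube's usual certificate layout under /var/lib/minikube/certs:
	
	  sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	    --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	    endpoint health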
	
	
	==> kernel <==
	 09:15:46 up 58 min,  0 user,  load average: 4.50, 3.43, 2.34
	Linux default-k8s-diff-port-891466 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e1ab809e55dad3c3b367621a2d2b4a7a079dcbfc73c1c5023db8aeba72f7c648] <==
	I1025 09:14:50.781339       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:14:50.781628       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:14:50.781824       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:14:50.781843       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:14:50.781868       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:14:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:14:51.075188       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:14:51.076000       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:14:51.076055       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:14:51.076633       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:14:51.676498       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:14:51.676519       1 metrics.go:72] Registering metrics
	I1025 09:14:51.676569       1 controller.go:711] "Syncing nftables rules"
	I1025 09:15:01.076721       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:01.076780       1 main.go:301] handling current node
	I1025 09:15:11.079809       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:11.079854       1 main.go:301] handling current node
	I1025 09:15:21.075335       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:21.075376       1 main.go:301] handling current node
	I1025 09:15:31.076774       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:31.076940       1 main.go:301] handling current node
	I1025 09:15:41.075037       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:41.075068       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e554bff30a14261e8aba9d0b797b3aa317f80c74e0ea6c81ce9fc3a7956a1e40] <==
	I1025 09:14:49.364595       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:14:49.364066       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:14:49.364572       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:14:49.364100       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:14:49.364903       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:14:49.365883       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:14:49.365943       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:14:49.365957       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:14:49.365964       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:14:49.365971       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:14:49.364058       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1025 09:14:49.374140       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:14:49.374555       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:14:49.391484       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:14:49.634086       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:14:49.662827       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:14:49.681251       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:14:49.688791       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:14:49.696624       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:14:49.731583       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.52.10"}
	I1025 09:14:49.742598       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.140.65"}
	I1025 09:14:50.268339       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:14:52.870366       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:14:53.115600       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:14:53.342420       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ad9cca7cd898cabfdf3a0ac2e99271e2139eef9d4a535d762fe568acfcd007ea] <==
	I1025 09:14:52.713016       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:14:52.713021       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:14:52.713047       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:14:52.713559       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:14:52.715403       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:14:52.715572       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:14:52.717897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:14:52.720199       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:14:52.722513       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:14:52.723609       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:14:52.723659       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:14:52.725849       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:14:52.729201       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:14:52.731467       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:14:52.731668       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:14:52.731685       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:14:52.731695       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:14:52.733434       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:14:52.736114       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:14:52.739243       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:14:52.741534       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:14:52.741618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:14:52.742952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:14:52.743047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:14:52.745322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2315c753ecdae32bd3c2309c84279ae635e349a3bd022e9ca8e253e5ad725ccb] <==
	I1025 09:14:50.634681       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:14:50.709123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:14:50.809583       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:14:50.809617       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:14:50.809709       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:14:50.829327       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:14:50.829386       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:14:50.834571       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:14:50.835101       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:14:50.835126       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:50.838565       1 config.go:200] "Starting service config controller"
	I1025 09:14:50.838602       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:14:50.838568       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:14:50.838580       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:14:50.838670       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:14:50.838678       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:14:50.838691       1 config.go:309] "Starting node config controller"
	I1025 09:14:50.838696       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:14:50.838703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:14:50.939515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:14:50.939531       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:14:50.939575       1 shared_informer.go:356] "Caches are synced" controller="service config"
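	
	The "Kube-proxy configuration may be incomplete" line above is advisory: with nodePortAddresses unset, NodePort services accept traffic on every local IP, including loopback. The fix the log suggests, written as a KubeProxyConfiguration fragment (a sketch; minikube generates this config itself):
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  nodePortAddresses: ["primary"]   # accept NodePort traffic only on the node's primary IPs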
	
	
	==> kube-scheduler [fd90aba5098707e9b4565da4efbbb072612744bbe8babcb4796b4df48b81c1bc] <==
	I1025 09:14:47.487538       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:14:49.290591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:14:49.290630       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1025 09:14:49.290668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:14:49.290679       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:14:49.333382       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:14:49.333490       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:49.336351       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:49.336902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:49.336751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:14:49.336774       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:14:49.437186       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433256     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rngzp\" (UniqueName: \"kubernetes.io/projected/5487b0fb-f7ad-42a5-a997-370f65e11e5e-kube-api-access-rngzp\") pod \"dashboard-metrics-scraper-6ffb444bf9-8247d\" (UID: \"5487b0fb-f7ad-42a5-a997-370f65e11e5e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d"
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433308     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5487b0fb-f7ad-42a5-a997-370f65e11e5e-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8247d\" (UID: \"5487b0fb-f7ad-42a5-a997-370f65e11e5e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d"
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433369     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d5f7c60e-ee23-40f4-a54a-e65c20dd7009-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lrnt4\" (UID: \"d5f7c60e-ee23-40f4-a54a-e65c20dd7009\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrnt4"
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433421     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94bp\" (UniqueName: \"kubernetes.io/projected/d5f7c60e-ee23-40f4-a54a-e65c20dd7009-kube-api-access-c94bp\") pod \"kubernetes-dashboard-855c9754f9-lrnt4\" (UID: \"d5f7c60e-ee23-40f4-a54a-e65c20dd7009\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrnt4"
	Oct 25 09:14:58 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:58.345292     713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:15:01 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:01.310743     713 scope.go:117] "RemoveContainer" containerID="1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169"
	Oct 25 09:15:01 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:01.327744     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrnt4" podStartSLOduration=4.686979834 podStartE2EDuration="8.327717173s" podCreationTimestamp="2025-10-25 09:14:53 +0000 UTC" firstStartedPulling="2025-10-25 09:14:53.719883115 +0000 UTC m=+7.608907465" lastFinishedPulling="2025-10-25 09:14:57.360620442 +0000 UTC m=+11.249644804" observedRunningTime="2025-10-25 09:14:58.323207379 +0000 UTC m=+12.212231805" watchObservedRunningTime="2025-10-25 09:15:01.327717173 +0000 UTC m=+15.216741541"
	Oct 25 09:15:02 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:02.317200     713 scope.go:117] "RemoveContainer" containerID="1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169"
	Oct 25 09:15:02 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:02.317844     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:02 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:02.318039     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:03 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:03.321981     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:03 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:03.322213     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:11 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:11.187400     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:11 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:11.187631     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:21 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:21.373391     713 scope.go:117] "RemoveContainer" containerID="2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:25.223880     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:25.390125     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:25.390346     713 scope.go:117] "RemoveContainer" containerID="c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:25.390591     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:31 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:31.187963     713 scope.go:117] "RemoveContainer" containerID="c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	Oct 25 09:15:31 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:31.188207     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: kubelet.service: Consumed 1.882s CPU time.
	
	
	==> kubernetes-dashboard [cbc2c58c4b15cc3dd1f62a796ae52abc67a963715dca52306484371b9990aaf3] <==
	2025/10/25 09:14:57 Using namespace: kubernetes-dashboard
	2025/10/25 09:14:57 Using in-cluster config to connect to apiserver
	2025/10/25 09:14:57 Using secret token for csrf signing
	2025/10/25 09:14:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:14:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:14:57 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:14:57 Generating JWE encryption key
	2025/10/25 09:14:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:14:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:14:57 Initializing JWE encryption key from synchronized object
	2025/10/25 09:14:57 Creating in-cluster Sidecar client
	2025/10/25 09:14:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:14:57 Serving insecurely on HTTP port: 9090
	2025/10/25 09:15:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:14:57 Starting overwatch
	
	
	==> storage-provisioner [2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f] <==
	I1025 09:14:50.604062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:15:20.606721       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d] <==
	I1025 09:15:21.662110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:15:21.671228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:15:21.671295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:15:21.673896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:25.133192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:29.393634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:32.993592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.048138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:39.070482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:39.076548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:39.076747       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:15:39.076834       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11fba150-462c-4200-a429-22a97d0e0933", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-891466_f3911dbe-f151-421c-a190-ac12a965ba8b became leader
	I1025 09:15:39.076896       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-891466_f3911dbe-f151-421c-a190-ac12a965ba8b!
	W1025 09:15:39.079212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:39.083478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:39.177218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-891466_f3911dbe-f151-421c-a190-ac12a965ba8b!
	W1025 09:15:41.086846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:41.092070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:43.095751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:43.100439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:45.103463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:45.108076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
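
The storage-provisioner sections are the most telling part of this dump: the first instance (2198288514e0) dies fatally at main.go:39 because it cannot reach the apiserver Service VIP at 10.96.0.1:443 within its 32s budget, while the replacement (a53aff721e25) starts a second later, reaches the apiserver, and acquires the kube-system/k8s.io-minikube-hostpath lease at 09:15:39. The sketch below is a minimal, illustrative reproduction of that startup probe only; the real provisioner goes through client-go's discovery client with the pod's service-account CA rather than a bare HTTP GET, and the URL and timeout are simply the values visible in the fatal log line:

	// versionprobe.go: fetch /version from the in-cluster Service VIP with the
	// same 32s budget as the failing request above. InsecureSkipVerify stands
	// in for the service-account CA bundle a real in-cluster client would load.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches ?timeout=32s in the failing GET
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("error getting server version:", err) // the F1025 line above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("apiserver answered: %s\n", body)
	}

Run from inside a pod on the cluster; from the host the Service VIP is normally unroutable and the call times out the same way the first provisioner instance did.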
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466: exit status 2 (373.974891ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
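
The "(may be ok)" note reflects how the harness reads minikube status: a queried field can print Running while the exit code still flags other components, which is expected in a Pause test where parts of the cluster are deliberately stopped. A minimal sketch of that pattern, reusing the binary path and profile from the command above (illustrative, not the actual helpers_test.go code):

	// statuscheck.go: run `minikube status` for one field and keep both the
	// printed state and the exit code, since non-zero here can still be "ok".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.APIServer}}",
			"-p", "default-k8s-diff-port-891466",
			"-n", "default-k8s-diff-port-891466")
		out, err := cmd.Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // minikube encodes component health in the exit status
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		// Against the paused profile above this prints: APIServer="Running" exit=2
		fmt.Printf("APIServer=%q exit=%d\n", strings.TrimSpace(string(out)), code)
	}
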
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-891466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-891466
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-891466:

-- stdout --
	[
	    {
	        "Id": "f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d",
	        "Created": "2025-10-25T09:13:33.96941541Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268784,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:14:39.491794999Z",
	            "FinishedAt": "2025-10-25T09:14:38.636263808Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/hosts",
	        "LogPath": "/var/lib/docker/containers/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d/f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d-json.log",
	        "Name": "/default-k8s-diff-port-891466",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-891466:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-891466",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f52ce971b3b8bcf8fc5e84dfb4013ed97854bb88b6a9547b8d027c2e6a31150d",
	                "LowerDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a-init/diff:/var/lib/docker/overlay2/7f05af0a637cd4060dc2fa79b10c746a45cce499ff139bb7fd08be9daf1020a6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94107a950e9899cf1d9a586edc9d8729556af5f1cd0f9d6209b2d1bbc02a767a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-891466",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-891466/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-891466",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-891466",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-891466",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95a5e9e3d8e9a1a53a22224479a59f3b032e2d5100ad3aef45f5b731747003fc",
	            "SandboxKey": "/var/run/docker/netns/95a5e9e3d8e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-891466": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:bf:bd:f6:94:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b0da8ae663923a6a96619f04827a51fa66502ca86c536d48116f797af6b2cd6f",
	                    "EndpointID": "e503a3bca52ae8e16d514ef2ff2badceda4f12737bf9e481448da026b9a0ef0d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-891466",
	                        "f52ce971b3b8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
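
One detail worth pulling out of this inspect dump: the kicbase container exposes a fixed set of internal ports (22, 2376, 5000, 8444, 32443), and Docker binds each to an ephemeral loopback port chosen at container start (33095-33099 here); that mapping is how minikube reaches SSH and the 8444 apiserver from the host. A small sketch (not minikube's own code) that recovers the mapping from JSON like the dump above, reading `docker inspect default-k8s-diff-port-891466` output on stdin:

	// inspectports.go: print container-port -> host-port bindings from a
	// `docker inspect` JSON document. The field names follow the
	// NetworkSettings.Ports structure shown in the dump above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var cs []container
		if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
			fmt.Fprintln(os.Stderr, "could not decode inspect JSON:", err)
			os.Exit(1)
		}
		if len(cs) == 0 {
			fmt.Fprintln(os.Stderr, "no container in input")
			os.Exit(1)
		}
		for port, binds := range cs[0].NetworkSettings.Ports {
			for _, b := range binds {
				// e.g. "8444/tcp -> 127.0.0.1:33098", the apiserver port for
				// this default-k8s-diff-port profile
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
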
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466: exit status 2 (358.262011ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-891466 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-891466 logs -n 25: (1.19839063s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-106968 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-036155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ stop    │ -p newest-cni-036155 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-106968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-891466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p newest-cni-036155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ start   │ -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:15 UTC │
	│ image   │ newest-cni-036155 image list --format=json                                                                                                                                                                                                    │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p newest-cni-036155 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p kubernetes-upgrade-497496                                                                                                                                                                                                                  │ kubernetes-upgrade-497496    │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p auto-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-687131                  │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ -p newest-cni-036155                                                                                                                                                                                                                          │ newest-cni-036155            │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p kindnet-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-687131               │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ image   │ embed-certs-106968 image list --format=json                                                                                                                                                                                                   │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p embed-certs-106968 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ image   │ default-k8s-diff-port-891466 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p embed-certs-106968                                                                                                                                                                                                                         │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ pause   │ -p default-k8s-diff-port-891466 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-891466 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ -p embed-certs-106968                                                                                                                                                                                                                         │ embed-certs-106968           │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -p calico-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-687131                │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:45.795886  288244 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:45.796033  288244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:45.796040  288244 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:45.796046  288244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:45.796329  288244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:15:45.796922  288244 out.go:368] Setting JSON to false
	I1025 09:15:45.798518  288244 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3494,"bootTime":1761380252,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:45.798597  288244 start.go:141] virtualization: kvm guest
	I1025 09:15:45.800613  288244 out.go:179] * [calico-687131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:45.802006  288244 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:15:45.802084  288244 notify.go:220] Checking for updates...
	I1025 09:15:45.804360  288244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:45.805539  288244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:15:45.806983  288244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:15:45.809097  288244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:15:45.810245  288244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:15:45.811935  288244 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:45.812091  288244 config.go:182] Loaded profile config "default-k8s-diff-port-891466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:45.812235  288244 config.go:182] Loaded profile config "kindnet-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:15:45.812347  288244 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:45.839216  288244 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:45.839316  288244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:45.908465  288244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:15:45.896439327 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:45.908575  288244 docker.go:318] overlay module found
	I1025 09:15:45.911340  288244 out.go:179] * Using the docker driver based on user configuration
	I1025 09:15:45.912714  288244 start.go:305] selected driver: docker
	I1025 09:15:45.912731  288244 start.go:925] validating driver "docker" against <nil>
	I1025 09:15:45.912744  288244 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:15:45.913337  288244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:45.976809  288244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:15:45.966391529 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:45.977077  288244 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:15:45.977379  288244 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:15:45.979490  288244 out.go:179] * Using Docker driver with root privileges
	I1025 09:15:45.980790  288244 cni.go:84] Creating CNI manager for "calico"
	I1025 09:15:45.980829  288244 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1025 09:15:45.980933  288244 start.go:349] cluster config:
	{Name:calico-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:45.982545  288244 out.go:179] * Starting "calico-687131" primary control-plane node in "calico-687131" cluster
	I1025 09:15:45.983954  288244 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:15:45.985258  288244 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:45.986413  288244 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:15:45.986451  288244 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:15:45.986474  288244 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:45.986507  288244 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:45.986584  288244 preload.go:233] Found /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:15:45.986597  288244 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:15:45.986706  288244 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/calico-687131/config.json ...
	I1025 09:15:45.986725  288244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/calico-687131/config.json: {Name:mk14fd252f567895e64e2af7f18cf8080bc26c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:46.011180  288244 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:15:46.011199  288244 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:15:46.011216  288244 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:15:46.011258  288244 start.go:360] acquireMachinesLock for calico-687131: {Name:mke7623a053b253bd3bd454dbcbd29fa3a6ca874 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:15:46.011609  288244 start.go:364] duration metric: took 327.794µs to acquireMachinesLock for "calico-687131"
	I1025 09:15:46.011670  288244 start.go:93] Provisioning new machine with config: &{Name:calico-687131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:15:46.011777  288244 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:15:42.581375  279928 addons.go:514] duration metric: took 680.941719ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:15:42.809605  279928 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-687131" context rescaled to 1 replicas
	W1025 09:15:44.309406  279928 node_ready.go:57] node "kindnet-687131" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 25 09:15:01 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:01.387859446Z" level=info msg="Started container" PID=1734 containerID=567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper id=6dbcb23b-3d0e-4663-8550-81c94463f504 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94a0a98e93bc9598898529a6b26b6e3ae0eacefce67d9bdabdce8c3cc8e5719c
	Oct 25 09:15:02 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:02.320832838Z" level=info msg="Removing container: 1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169" id=c92cd343-1a71-478c-8819-aa951d813d50 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:02 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:02.341634698Z" level=info msg="Removed container 1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=c92cd343-1a71-478c-8819-aa951d813d50 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.373938228Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=19c60099-dbd4-4f23-93f6-08ecf17beb16 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.426893359Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fc1dea78-ecf4-4e8f-8562-769316e6a98a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.428256909Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=68eb750b-6fc9-4a46-97c1-8b1a7bbed97c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.428405336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.447600175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.447846602Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/de45823f146340e9fa893d1e143a15212471233de5d12c27331b2182e4a86596/merged/etc/passwd: no such file or directory"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.447886398Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/de45823f146340e9fa893d1e143a15212471233de5d12c27331b2182e4a86596/merged/etc/group: no such file or directory"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.448215773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.644937831Z" level=info msg="Created container a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d: kube-system/storage-provisioner/storage-provisioner" id=68eb750b-6fc9-4a46-97c1-8b1a7bbed97c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.645792744Z" level=info msg="Starting container: a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d" id=17ec7134-8858-444c-a492-1c76e3b31aae name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:21 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:21.648250521Z" level=info msg="Started container" PID=1748 containerID=a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d description=kube-system/storage-provisioner/storage-provisioner id=17ec7134-8858-444c-a492-1c76e3b31aae name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1ddb17fba4bb43ad7641c1336a5361dfcf55ca1f23d4fb74295e1a0e16e87fc
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.224382474Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2e7acfb4-5e7c-4cbb-b7f6-2bad0b8c175f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.225497077Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=639cb57b-86a4-4f25-be63-de026345119c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.226788068Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=952fd224-2270-472e-9dac-6a8244b8b634 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.226928864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.232982527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.233501635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.269329189Z" level=info msg="Created container c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=952fd224-2270-472e-9dac-6a8244b8b634 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.270144661Z" level=info msg="Starting container: c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3" id=7c9f6499-c3e7-43ff-b2e3-40bba95c6fbe name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.273974224Z" level=info msg="Started container" PID=1764 containerID=c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper id=7c9f6499-c3e7-43ff-b2e3-40bba95c6fbe name=/runtime.v1.RuntimeService/StartContainer sandboxID=94a0a98e93bc9598898529a6b26b6e3ae0eacefce67d9bdabdce8c3cc8e5719c
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.391534917Z" level=info msg="Removing container: 567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2" id=ff7792d9-f4fb-4a56-82ec-b31e749307df name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:15:25 default-k8s-diff-port-891466 crio[564]: time="2025-10-25T09:15:25.404825018Z" level=info msg="Removed container 567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d/dashboard-metrics-scraper" id=ff7792d9-f4fb-4a56-82ec-b31e749307df name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c403ec41066f5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   94a0a98e93bc9       dashboard-metrics-scraper-6ffb444bf9-8247d             kubernetes-dashboard
	a53aff721e253       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   f1ddb17fba4bb       storage-provisioner                                    kube-system
	cbc2c58c4b15c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   ab159ab39d4d7       kubernetes-dashboard-855c9754f9-lrnt4                  kubernetes-dashboard
	ab827ee753758       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   e3ee2159b7ee4       coredns-66bc5c9577-72zpn                               kube-system
	75b84bcae614a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   ea57a139889e5       busybox                                                default
	2198288514e04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   f1ddb17fba4bb       storage-provisioner                                    kube-system
	e1ab809e55dad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   4b7eb5b906aec       kindnet-9xc2z                                          kube-system
	2315c753ecdae       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   4a65801e61542       kube-proxy-rmqbr                                       kube-system
	0b42736720451       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   8d6f42b6be632       etcd-default-k8s-diff-port-891466                      kube-system
	e554bff30a142       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   0e102f3be2fe9       kube-apiserver-default-k8s-diff-port-891466            kube-system
	fd90aba509870       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   7dfa8ac6333ee       kube-scheduler-default-k8s-diff-port-891466            kube-system
	ad9cca7cd898c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   fbfed94972ec2       kube-controller-manager-default-k8s-diff-port-891466   kube-system
	
	
	==> coredns [ab827ee7537580129a5443a427008b45db6bea12d0e1320adb16f5314fd100da] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54801 - 11762 "HINFO IN 6181491788219434077.8397239723996058082. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084392335s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
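
The three reflector errors above match the storage-provisioner failure further down: the kubernetes Service VIP (10.96.0.1:443) timed out from the pod network for a window after the restart, until kube-proxy finished resyncing. A minimal Go sketch of the reachability probe coredns is effectively making, assuming the default Service CIDR from the config dump above; the address and timeout are illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// coredns's reflector dials this VIP expecting kube-proxy to DNAT it
		// to a live apiserver endpoint; a timeout here reproduces the error.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("service VIP reachable via", conn.RemoteAddr())
	}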
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-891466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-891466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=default-k8s-diff-port-891466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_13_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-891466
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:15:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:13:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:15:40 +0000   Sat, 25 Oct 2025 09:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-891466
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2fa36a04-64f2-4ad6-99cd-8fd412b795ce
	  Boot ID:                    590a8a07-3e37-4e62-94d6-23acfbec29af
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-72zpn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-default-k8s-diff-port-891466                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-9xc2z                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-default-k8s-diff-port-891466             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-891466    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-rmqbr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-default-k8s-diff-port-891466             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8247d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lrnt4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           114s               node-controller  Node default-k8s-diff-port-891466 event: Registered Node default-k8s-diff-port-891466 in Controller
	  Normal  NodeReady                102s               kubelet          Node default-k8s-diff-port-891466 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-891466 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node default-k8s-diff-port-891466 event: Registered Node default-k8s-diff-port-891466 in Controller
	
	
	==> dmesg <==
	[  +0.098281] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026987] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.303163] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.012050] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023896] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023867] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +1.023854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +2.047723] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +4.031590] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[  +8.191109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000043] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[Oct25 08:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	[ +32.252571] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 9e ba e1 7e 23 84 86 8a 5e fa 59 10 08 00
	
	
	==> etcd [0b4273672045197aa9930a7861b7ea9c702bee1c1761abe1fac0ba82696ba0bb] <==
	{"level":"warn","ts":"2025-10-25T09:14:48.579222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.589412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.599878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.615957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.627694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.635487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.643368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.650569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.658089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.665121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.672611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.680613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.688821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.696051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.703448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.710005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.717391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.723889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.730758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.738479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.753838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.761364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.769251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:14:48.824558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57944","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:15:20.297530Z","caller":"traceutil/trace.go:172","msg":"trace[133654933] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"120.31102ms","start":"2025-10-25T09:15:20.177200Z","end":"2025-10-25T09:15:20.297511Z","steps":["trace[133654933] 'process raft request'  (duration: 120.175163ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:15:48 up 58 min,  0 user,  load average: 4.50, 3.43, 2.34
	Linux default-k8s-diff-port-891466 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e1ab809e55dad3c3b367621a2d2b4a7a079dcbfc73c1c5023db8aeba72f7c648] <==
	I1025 09:14:50.781339       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:14:50.781628       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:14:50.781824       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:14:50.781843       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:14:50.781868       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:14:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:14:51.075188       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:14:51.076000       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:14:51.076055       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:14:51.076633       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:14:51.676498       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:14:51.676519       1 metrics.go:72] Registering metrics
	I1025 09:14:51.676569       1 controller.go:711] "Syncing nftables rules"
	I1025 09:15:01.076721       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:01.076780       1 main.go:301] handling current node
	I1025 09:15:11.079809       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:11.079854       1 main.go:301] handling current node
	I1025 09:15:21.075335       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:21.075376       1 main.go:301] handling current node
	I1025 09:15:31.076774       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:31.076940       1 main.go:301] handling current node
	I1025 09:15:41.075037       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:15:41.075068       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e554bff30a14261e8aba9d0b797b3aa317f80c74e0ea6c81ce9fc3a7956a1e40] <==
	I1025 09:14:49.364595       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:14:49.364066       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:14:49.364572       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:14:49.364100       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:14:49.364903       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:14:49.365883       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:14:49.365943       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:14:49.365957       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:14:49.365964       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:14:49.365971       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:14:49.364058       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1025 09:14:49.374140       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:14:49.374555       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:14:49.391484       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:14:49.634086       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:14:49.662827       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:14:49.681251       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:14:49.688791       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:14:49.696624       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:14:49.731583       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.52.10"}
	I1025 09:14:49.742598       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.140.65"}
	I1025 09:14:50.268339       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:14:52.870366       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:14:53.115600       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:14:53.342420       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ad9cca7cd898cabfdf3a0ac2e99271e2139eef9d4a535d762fe568acfcd007ea] <==
	I1025 09:14:52.713016       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:14:52.713021       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:14:52.713047       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:14:52.713559       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:14:52.715403       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:14:52.715572       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:14:52.717897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:14:52.720199       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:14:52.722513       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:14:52.723609       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:14:52.723659       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:14:52.725849       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:14:52.729201       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:14:52.731467       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:14:52.731668       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:14:52.731685       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:14:52.731695       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:14:52.733434       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:14:52.736114       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:14:52.739243       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:14:52.741534       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:14:52.741618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:14:52.742952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:14:52.743047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:14:52.745322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2315c753ecdae32bd3c2309c84279ae635e349a3bd022e9ca8e253e5ad725ccb] <==
	I1025 09:14:50.634681       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:14:50.709123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:14:50.809583       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:14:50.809617       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:14:50.809709       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:14:50.829327       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:14:50.829386       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:14:50.834571       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:14:50.835101       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:14:50.835126       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:50.838565       1 config.go:200] "Starting service config controller"
	I1025 09:14:50.838602       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:14:50.838568       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:14:50.838580       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:14:50.838670       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:14:50.838678       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:14:50.838691       1 config.go:309] "Starting node config controller"
	I1025 09:14:50.838696       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:14:50.838703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:14:50.939515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:14:50.939531       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:14:50.939575       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fd90aba5098707e9b4565da4efbbb072612744bbe8babcb4796b4df48b81c1bc] <==
	I1025 09:14:47.487538       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:14:49.290591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:14:49.290630       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1025 09:14:49.290668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:14:49.290679       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:14:49.333382       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:14:49.333490       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:14:49.336351       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:49.336902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:14:49.336751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:14:49.336774       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:14:49.437186       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433256     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rngzp\" (UniqueName: \"kubernetes.io/projected/5487b0fb-f7ad-42a5-a997-370f65e11e5e-kube-api-access-rngzp\") pod \"dashboard-metrics-scraper-6ffb444bf9-8247d\" (UID: \"5487b0fb-f7ad-42a5-a997-370f65e11e5e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d"
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433308     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5487b0fb-f7ad-42a5-a997-370f65e11e5e-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8247d\" (UID: \"5487b0fb-f7ad-42a5-a997-370f65e11e5e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d"
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433369     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d5f7c60e-ee23-40f4-a54a-e65c20dd7009-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lrnt4\" (UID: \"d5f7c60e-ee23-40f4-a54a-e65c20dd7009\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrnt4"
	Oct 25 09:14:53 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:53.433421     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94bp\" (UniqueName: \"kubernetes.io/projected/d5f7c60e-ee23-40f4-a54a-e65c20dd7009-kube-api-access-c94bp\") pod \"kubernetes-dashboard-855c9754f9-lrnt4\" (UID: \"d5f7c60e-ee23-40f4-a54a-e65c20dd7009\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrnt4"
	Oct 25 09:14:58 default-k8s-diff-port-891466 kubelet[713]: I1025 09:14:58.345292     713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:15:01 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:01.310743     713 scope.go:117] "RemoveContainer" containerID="1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169"
	Oct 25 09:15:01 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:01.327744     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrnt4" podStartSLOduration=4.686979834 podStartE2EDuration="8.327717173s" podCreationTimestamp="2025-10-25 09:14:53 +0000 UTC" firstStartedPulling="2025-10-25 09:14:53.719883115 +0000 UTC m=+7.608907465" lastFinishedPulling="2025-10-25 09:14:57.360620442 +0000 UTC m=+11.249644804" observedRunningTime="2025-10-25 09:14:58.323207379 +0000 UTC m=+12.212231805" watchObservedRunningTime="2025-10-25 09:15:01.327717173 +0000 UTC m=+15.216741541"
	Oct 25 09:15:02 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:02.317200     713 scope.go:117] "RemoveContainer" containerID="1f6fb75313ac335a5cc6088ee4f0e6a3b728cf746dbb0f3174aadca95a7ee169"
	Oct 25 09:15:02 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:02.317844     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:02 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:02.318039     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:03 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:03.321981     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:03 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:03.322213     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:11 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:11.187400     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:11 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:11.187631     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:21 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:21.373391     713 scope.go:117] "RemoveContainer" containerID="2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:25.223880     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:25.390125     713 scope.go:117] "RemoveContainer" containerID="567ffc7d9a7faab61266d552fad4180866d95463e78add7e23bab094b36dada2"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:25.390346     713 scope.go:117] "RemoveContainer" containerID="c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	Oct 25 09:15:25 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:25.390591     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:31 default-k8s-diff-port-891466 kubelet[713]: I1025 09:15:31.187963     713 scope.go:117] "RemoveContainer" containerID="c403ec41066f57da1ad9607c7ad2767ae691b52cddb1c318603b362b516adae3"
	Oct 25 09:15:31 default-k8s-diff-port-891466 kubelet[713]: E1025 09:15:31.188207     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8247d_kubernetes-dashboard(5487b0fb-f7ad-42a5-a997-370f65e11e5e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8247d" podUID="5487b0fb-f7ad-42a5-a997-370f65e11e5e"
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:15:43 default-k8s-diff-port-891466 systemd[1]: kubelet.service: Consumed 1.882s CPU time.
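
The systemd lines that close the kubelet log are the pause step itself: the Pause subtest stops kubelet before freezing the workloads, which is why this capture ends at 09:15:43. A minimal Go sketch of driving the same pause-then-check sequence, assuming a minikube binary on PATH and reusing the profile name from this run; both commands mirror ones logged by this suite:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to minikube and echoes the combined output, the same
	// way the harness captures its (dbg) command results.
	func run(args ...string) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		if err != nil {
			fmt.Println("exit:", err)
		}
	}

	func main() {
		profile := "default-k8s-diff-port-891466" // profile from this report
		run("pause", "-p", profile, "--alsologtostderr", "-v=1")
		run("status", "-p", profile, "--format", "{{.APIServer}}")
	}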
	
	
	==> kubernetes-dashboard [cbc2c58c4b15cc3dd1f62a796ae52abc67a963715dca52306484371b9990aaf3] <==
	2025/10/25 09:14:57 Using namespace: kubernetes-dashboard
	2025/10/25 09:14:57 Using in-cluster config to connect to apiserver
	2025/10/25 09:14:57 Using secret token for csrf signing
	2025/10/25 09:14:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:14:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:14:57 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:14:57 Generating JWE encryption key
	2025/10/25 09:14:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:14:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:14:57 Initializing JWE encryption key from synchronized object
	2025/10/25 09:14:57 Creating in-cluster Sidecar client
	2025/10/25 09:14:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:14:57 Serving insecurely on HTTP port: 9090
	2025/10/25 09:14:57 Starting overwatch
	2025/10/25 09:15:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2198288514e0414cf9b938d37034c1ced5870b2bd6cc0560d3e7362c9459416f] <==
	I1025 09:14:50.604062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:15:20.606721       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a53aff721e253ff923329bbba29a564d48a1ce701bce5e34ab657bef2b509d8d] <==
	I1025 09:15:21.662110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:15:21.671228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:15:21.671295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:15:21.673896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:25.133192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:29.393634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:32.993592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:36.048138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:39.070482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:39.076548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:39.076747       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:15:39.076834       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11fba150-462c-4200-a429-22a97d0e0933", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-891466_f3911dbe-f151-421c-a190-ac12a965ba8b became leader
	I1025 09:15:39.076896       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-891466_f3911dbe-f151-421c-a190-ac12a965ba8b!
	W1025 09:15:39.079212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:39.083478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:15:39.177218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-891466_f3911dbe-f151-421c-a190-ac12a965ba8b!
	W1025 09:15:41.086846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:41.092070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:43.095751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:43.100439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:45.103463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:45.108076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:47.111324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:15:47.118620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
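
The Endpoints deprecation warnings that fill the second storage-provisioner log come from its leader-election lock still being a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event above). For contrast, a minimal sketch of the Lease-based lock that client-go recommends; this is an illustrative rewrite, not the provisioner's actual code:

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A coordination.k8s.io/v1 Lease replaces the deprecated Endpoints
		// lock; namespace and name mirror the ones in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "storage-provisioner-example"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* begin provisioning */ },
				OnStoppedLeading: func() { /* halt work */ },
			},
		})
	}
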
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466: exit status 2 (367.733923ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
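
The --format flag in the status check above is an ordinary Go text/template evaluated against minikube's status struct, which is why {{.APIServer}} prints a bare "Running" even though the command itself exited 2 (a stopped component after pause is reported via the exit code). A self-contained sketch of the same mechanism; the struct here is an assumption for illustration, not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the shape a format string like {{.APIServer}}
	// selects a field from; the field names are illustrative.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Running", matching the captured stdout above even though
		// the real command signalled a stopped component via exit status 2.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}
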
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-891466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.53s)
E1025 09:17:22.782222    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:22.788735    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:22.800724    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:22.822475    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:22.864344    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:22.945651    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:23.107746    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:23.429609    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:24.071151    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:25.352680    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:27.914097    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:33.035769    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (263/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.29
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 4.37
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.83
22 TestOffline 61.07
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 155.62
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 7.44
48 TestAddons/StoppedEnableDisable 16.75
49 TestCertOptions 33.59
50 TestCertExpiration 215.95
52 TestForceSystemdFlag 32.17
53 TestForceSystemdEnv 27.13
58 TestErrorSpam/setup 20.1
59 TestErrorSpam/start 0.68
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 6.83
62 TestErrorSpam/unpause 5.35
63 TestErrorSpam/stop 2.62
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.24
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.33
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.77
75 TestFunctional/serial/CacheCmd/cache/add_local 1.16
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 44.89
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.22
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 4.16
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 6.34
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.11
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 25.76
101 TestFunctional/parallel/SSHCmd 0.64
102 TestFunctional/parallel/CpCmd 1.88
103 TestFunctional/parallel/MySQL 15.32
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.84
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.43
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
116 TestFunctional/parallel/MountCmd/any-port 6.25
117 TestFunctional/parallel/ProfileCmd/profile_list 0.47
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
119 TestFunctional/parallel/MountCmd/specific-port 1.75
120 TestFunctional/parallel/MountCmd/VerifyCleanup 2.16
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.26
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.48
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.19
142 TestFunctional/parallel/ImageCommands/Setup 0.97
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 147.85
163 TestMultiControlPlane/serial/DeployApp 4.22
164 TestMultiControlPlane/serial/PingHostFromPods 1.04
165 TestMultiControlPlane/serial/AddWorkerNode 25.13
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
168 TestMultiControlPlane/serial/CopyFile 17.4
169 TestMultiControlPlane/serial/StopSecondaryNode 13.3
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.33
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 111.34
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.59
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
176 TestMultiControlPlane/serial/StopCluster 47.29
177 TestMultiControlPlane/serial/RestartCluster 52.9
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
179 TestMultiControlPlane/serial/AddSecondaryNode 35.21
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
184 TestJSONOutput/start/Command 38.6
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.18
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
209 TestKicCustomNetwork/create_custom_network 32.74
210 TestKicCustomNetwork/use_default_bridge_network 24.79
211 TestKicExistingNetwork 24.09
212 TestKicCustomSubnet 24.62
213 TestKicStaticIP 26.46
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 47.57
218 TestMountStart/serial/StartWithMountFirst 6.01
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 5.28
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.69
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.24
225 TestMountStart/serial/RestartStopped 7.28
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 60.87
230 TestMultiNode/serial/DeployApp2Nodes 3.26
231 TestMultiNode/serial/PingHostFrom2Pods 0.73
232 TestMultiNode/serial/AddNode 23.79
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.68
235 TestMultiNode/serial/CopyFile 9.98
236 TestMultiNode/serial/StopNode 2.31
237 TestMultiNode/serial/StartAfterStop 7.23
238 TestMultiNode/serial/RestartKeepsNodes 66.53
239 TestMultiNode/serial/DeleteNode 5.29
240 TestMultiNode/serial/StopMultiNode 28.95
241 TestMultiNode/serial/RestartMultiNode 27.35
242 TestMultiNode/serial/ValidateNameConflict 24.19
247 TestPreload 88.29
249 TestScheduledStopUnix 98
252 TestInsufficientStorage 12.34
253 TestRunningBinaryUpgrade 52.95
255 TestKubernetesUpgrade 302.54
256 TestMissingContainerUpgrade 80.75
257 TestStoppedBinaryUpgrade/Setup 0.58
260 TestPause/serial/Start 55.02
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
262 TestNoKubernetes/serial/StartWithK8s 40.82
263 TestStoppedBinaryUpgrade/Upgrade 56.43
264 TestNoKubernetes/serial/StartWithStopK8s 28.59
265 TestPause/serial/SecondStartNoReconfiguration 6.15
266 TestStoppedBinaryUpgrade/MinikubeLogs 1
275 TestNoKubernetes/serial/Start 8.21
283 TestNetworkPlugins/group/false 4.16
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
285 TestNoKubernetes/serial/ProfileList 4.52
289 TestNoKubernetes/serial/Stop 1.32
290 TestNoKubernetes/serial/StartNoArgs 7.67
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
293 TestStartStop/group/old-k8s-version/serial/FirstStart 51.33
294 TestStartStop/group/old-k8s-version/serial/DeployApp 8.26
296 TestStartStop/group/old-k8s-version/serial/Stop 16.72
298 TestStartStop/group/no-preload/serial/FirstStart 49.18
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
300 TestStartStop/group/old-k8s-version/serial/SecondStart 53.55
301 TestStartStop/group/no-preload/serial/DeployApp 7.23
303 TestStartStop/group/no-preload/serial/Stop 16.23
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 51.89
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
311 TestStartStop/group/embed-certs/serial/FirstStart 71.53
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.54
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
319 TestStartStop/group/newest-cni/serial/FirstStart 30.42
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
321 TestStartStop/group/embed-certs/serial/DeployApp 7.23
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.16
325 TestStartStop/group/embed-certs/serial/Stop 16.24
326 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/Stop 18.01
329 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
330 TestStartStop/group/embed-certs/serial/SecondStart 46.43
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
334 TestStartStop/group/newest-cni/serial/SecondStart 14.26
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
339 TestNetworkPlugins/group/auto/Start 40.31
340 TestNetworkPlugins/group/kindnet/Start 40.34
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.07
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
349 TestNetworkPlugins/group/calico/Start 50.42
350 TestNetworkPlugins/group/custom-flannel/Start 52.57
351 TestNetworkPlugins/group/auto/KubeletFlags 0.45
352 TestNetworkPlugins/group/auto/NetCatPod 10.45
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
355 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
356 TestNetworkPlugins/group/auto/DNS 0.12
357 TestNetworkPlugins/group/auto/Localhost 0.1
358 TestNetworkPlugins/group/auto/HairPin 0.1
359 TestNetworkPlugins/group/kindnet/DNS 0.14
360 TestNetworkPlugins/group/kindnet/Localhost 0.11
361 TestNetworkPlugins/group/kindnet/HairPin 0.1
362 TestNetworkPlugins/group/enable-default-cni/Start 40.63
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/flannel/Start 56.77
365 TestNetworkPlugins/group/calico/KubeletFlags 0.34
366 TestNetworkPlugins/group/calico/NetCatPod 9.24
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
369 TestNetworkPlugins/group/calico/DNS 0.13
370 TestNetworkPlugins/group/calico/Localhost 0.1
371 TestNetworkPlugins/group/calico/HairPin 0.09
372 TestNetworkPlugins/group/custom-flannel/DNS 0.13
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.25
377 TestNetworkPlugins/group/bridge/Start 60.4
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
383 TestNetworkPlugins/group/flannel/NetCatPod 8.19
384 TestNetworkPlugins/group/flannel/DNS 0.1
385 TestNetworkPlugins/group/flannel/Localhost 0.09
386 TestNetworkPlugins/group/flannel/HairPin 0.09
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
388 TestNetworkPlugins/group/bridge/NetCatPod 9.17
389 TestNetworkPlugins/group/bridge/DNS 0.15
390 TestNetworkPlugins/group/bridge/Localhost 0.09
391 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (4.29s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-556430 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-556430 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.285536291s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.29s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 08:29:25.850657    9473 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 08:29:25.850753    9473 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
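The check above is just a stat of a cached tarball whose name encodes the Kubernetes version and container runtime. A hypothetical sketch of that kind of lookup; the "v18" schema segment and cri-o-overlay-amd64 suffix are copied from the log line above, and treating them as a stable naming pattern (rather than quoting minikube's preload.go) is an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache path printed in the log above.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("No local preload; it would have to be downloaded:", p)
	}
}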

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-556430
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-556430: exit status 85 (72.077785ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-556430 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-556430 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:21.617897    9485 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:21.618174    9485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:21.618185    9485 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:21.618189    9485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:21.618364    9485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	W1025 08:29:21.618486    9485 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21796-5966/.minikube/config/config.json: open /home/jenkins/minikube-integration/21796-5966/.minikube/config/config.json: no such file or directory
	I1025 08:29:21.618983    9485 out.go:368] Setting JSON to true
	I1025 08:29:21.619893    9485 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":710,"bootTime":1761380252,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:29:21.619979    9485 start.go:141] virtualization: kvm guest
	I1025 08:29:21.622275    9485 out.go:99] [download-only-556430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1025 08:29:21.622416    9485 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 08:29:21.622460    9485 notify.go:220] Checking for updates...
	I1025 08:29:21.623732    9485 out.go:171] MINIKUBE_LOCATION=21796
	I1025 08:29:21.625249    9485 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:21.626436    9485 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:29:21.627984    9485 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 08:29:21.629147    9485 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 08:29:21.631549    9485 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:29:21.631806    9485 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:21.657856    9485 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:29:21.657987    9485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:22.073881    9485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-25 08:29:22.061455068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:22.073983    9485 docker.go:318] overlay module found
	I1025 08:29:22.075543    9485 out.go:99] Using the docker driver based on user configuration
	I1025 08:29:22.075578    9485 start.go:305] selected driver: docker
	I1025 08:29:22.075585    9485 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:22.075699    9485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:22.140805    9485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-25 08:29:22.130558776 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:22.141024    9485 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:22.141563    9485 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 08:29:22.141756    9485 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:29:22.143577    9485 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-556430 host does not exist
	  To start a cluster, run: "minikube start -p download-only-556430"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-556430
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (4.37s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-894917 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-894917 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.366378102s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.37s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 08:29:30.665655    9473 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 08:29:30.665694    9473 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-894917
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-894917: exit status 85 (74.248554ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-556430 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-556430 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-556430                                                                                                                                                   │ download-only-556430 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-894917 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-894917 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:26.350045    9840 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:26.350260    9840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:26.350269    9840 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:26.350273    9840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:26.350457    9840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:29:26.350930    9840 out.go:368] Setting JSON to true
	I1025 08:29:26.351697    9840 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":714,"bootTime":1761380252,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:29:26.351794    9840 start.go:141] virtualization: kvm guest
	I1025 08:29:26.353980    9840 out.go:99] [download-only-894917] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:29:26.354104    9840 notify.go:220] Checking for updates...
	I1025 08:29:26.355380    9840 out.go:171] MINIKUBE_LOCATION=21796
	I1025 08:29:26.356844    9840 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:26.358118    9840 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:29:26.359418    9840 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 08:29:26.360464    9840 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 08:29:26.362604    9840 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:29:26.362864    9840 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:26.385866    9840 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:29:26.385936    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:26.440433    9840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-25 08:29:26.431294093 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:26.440537    9840 docker.go:318] overlay module found
	I1025 08:29:26.442144    9840 out.go:99] Using the docker driver based on user configuration
	I1025 08:29:26.442178    9840 start.go:305] selected driver: docker
	I1025 08:29:26.442185    9840 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:26.442266    9840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:26.501354    9840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-25 08:29:26.49162661 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:29:26.501525    9840 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:26.502023    9840 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 08:29:26.502178    9840 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:29:26.503984    9840 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-894917 host does not exist
	  To start a cluster, run: "minikube start -p download-only-894917"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-894917
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-298854 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-298854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-298854
--- PASS: TestDownloadOnlyKic (0.41s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
I1025 08:29:31.823064    9473 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-499929 --alsologtostderr --binary-mirror http://127.0.0.1:44063 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-499929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-499929
--- PASS: TestBinaryMirror (0.83s)
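The binary.go line above skips the local cache and hands the downloader a URL carrying a checksum=file:... query, i.e. "fetch the .sha256 sidecar and verify the download against it". A self-contained Go sketch of the equivalent verification; it assumes the sidecar's first whitespace-separated field is the hex digest, which holds for dl.k8s.io's .sha256 files but is an assumption here:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	bin := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
	body, err := fetch(bin)
	if err != nil {
		panic(err)
	}
	sidecar, err := fetch(bin + ".sha256")
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(body)
	got := hex.EncodeToString(sum[:])
	want := strings.Fields(string(sidecar))[0] // first field holds the digest
	fmt.Println("checksum ok:", got == want)
}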

TestOffline (61.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-559981 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-559981 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (58.399679986s)
helpers_test.go:175: Cleaning up "offline-crio-559981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-559981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-559981: (2.668696223s)
--- PASS: TestOffline (61.07s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-475995
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-475995: exit status 85 (64.567428ms)

-- stdout --
	* Profile "addons-475995" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-475995"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-475995
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-475995: exit status 85 (65.356468ms)

-- stdout --
	* Profile "addons-475995" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-475995"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (155.62s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-475995 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-475995 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.622714894s)
--- PASS: TestAddons/Setup (155.62s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-475995 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-475995 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-475995 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-475995 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bd7ad6d8-21ea-4f20-9bbd-79df26ebdc4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bd7ad6d8-21ea-4f20-9bbd-79df26ebdc4d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003814137s
addons_test.go:694: (dbg) Run:  kubectl --context addons-475995 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-475995 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-475995 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

TestAddons/StoppedEnableDisable (16.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-475995
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-475995: (16.460619596s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-475995
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-475995
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-475995
--- PASS: TestAddons/StoppedEnableDisable (16.75s)

TestCertOptions (33.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-077936 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-077936 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.923070923s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-077936 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-077936 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-077936 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-077936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-077936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-077936: (2.687023177s)
--- PASS: TestCertOptions (33.59s)

TestCertExpiration (215.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-851718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-851718 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.35591353s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-851718 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.053317529s)
helpers_test.go:175: Cleaning up "cert-expiration-851718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-851718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-851718: (2.541534406s)
--- PASS: TestCertExpiration (215.95s)
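The sequence above starts the cluster with --cert-expiration=3m, lets that window lapse, then restarts with --cert-expiration=8760h, so the second start only succeeds if the expired certs are regenerated. A minimal sketch (not minikube's code) of inspecting a cert's remaining lifetime, pointed at the apiserver.crt path that TestCertOptions reads above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data) // first PEM block: the certificate
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("expires %s (in %s)\n",
		cert.NotAfter.Format(time.RFC3339),
		time.Until(cert.NotAfter).Round(time.Minute))
}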

TestForceSystemdFlag (32.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-742570 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-742570 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.810738835s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-742570 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-742570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-742570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-742570: (2.895134563s)
--- PASS: TestForceSystemdFlag (32.17s)
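The test verifies the flag took effect by catting the generated CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf. A sketch of the kind of assertion that implies; that --force-systemd surfaces there as cgroup_manager = "systemd" is an assumption based on CRI-O's documented setting, not a quote from the test:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, line := range strings.Split(string(data), "\n") {
		// Expected (assumed) line: cgroup_manager = "systemd"
		if strings.Contains(line, "cgroup_manager") {
			fmt.Println(strings.TrimSpace(line))
		}
	}
}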

TestForceSystemdEnv (27.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-423026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-423026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.269356732s)
helpers_test.go:175: Cleaning up "force-systemd-env-423026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-423026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-423026: (2.864982444s)
--- PASS: TestForceSystemdEnv (27.13s)

TestErrorSpam/setup (20.1s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-016534 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-016534 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-016534 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-016534 --driver=docker  --container-runtime=crio: (20.10207442s)
--- PASS: TestErrorSpam/setup (20.10s)
TestErrorSpam/start (0.68s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)
TestErrorSpam/status (0.96s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 status
--- PASS: TestErrorSpam/status (0.96s)
TestErrorSpam/pause (6.83s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause: exit status 80 (2.334740042s)
-- stdout --
	* Pausing node nospam-016534 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:35:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause: exit status 80 (2.161679144s)
-- stdout --
	* Pausing node nospam-016534 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:35:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause: exit status 80 (2.331140778s)
-- stdout --
	* Pausing node nospam-016534 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:35:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.83s)
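Note: all three pause attempts fail identically. `minikube pause` lists containers by shelling into the node and running the exact command captured in the stderr above, and runc exits 1 because its state directory /run/runc was never created. A sketch of reproducing the probe by hand, assuming the profile is still running:

    $ out/minikube-linux-amd64 -p nospam-016534 ssh -- sudo runc list -f json

The subtest still passes because it asserts on the shape of the error output (no duplicated spam), not on pause succeeding; the same failure repeats in the unpause subtest below.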
TestErrorSpam/unpause (5.35s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause: exit status 80 (1.698849001s)
-- stdout --
	* Unpausing node nospam-016534 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause: exit status 80 (1.899135059s)
-- stdout --
	* Unpausing node nospam-016534 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:35:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause: exit status 80 (1.754841407s)
-- stdout --
	* Unpausing node nospam-016534 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:35:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.35s)
TestErrorSpam/stop (2.62s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 stop: (2.419810746s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-016534 --log_dir /tmp/nospam-016534 stop
--- PASS: TestErrorSpam/stop (2.62s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21796-5966/.minikube/files/etc/test/nested/copy/9473/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
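Note: this verifies minikube's file sync, where files placed under $MINIKUBE_HOME/files on the host are copied into the node at the corresponding absolute path. A minimal sketch, assuming the default ~/.minikube home; the nested path mirrors the one logged above:

    $ mkdir -p ~/.minikube/files/etc/test/nested/copy/9473
    $ echo "synced" > ~/.minikube/files/etc/test/nested/copy/9473/hosts
    $ minikube start -p functional-734361                                   # sync happens during start
    $ minikube -p functional-734361 ssh -- cat /etc/test/nested/copy/9473/hosts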
TestFunctional/serial/StartWithProxy (41.24s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-734361 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-734361 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.243239026s)
--- PASS: TestFunctional/serial/StartWithProxy (41.24s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (6.33s)
=== RUN   TestFunctional/serial/SoftStart
I1025 08:36:38.259820    9473 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-734361 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-734361 --alsologtostderr -v=8: (6.327435133s)
functional_test.go:678: soft start took 6.328136467s for "functional-734361" cluster.
I1025 08:36:44.587620    9473 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.33s)
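Note: soft start re-runs `start` against the already-running profile created in StartWithProxy, so nothing is re-provisioned; the cluster that took 41.2s to stand up cold is revalidated here in about 6.3s. The only difference is invoking start a second time:

    $ out/minikube-linux-amd64 start -p functional-734361 --alsologtostderr -v=8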
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-734361 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)
TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)
TestFunctional/serial/CacheCmd/cache/add_local (1.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-734361 /tmp/TestFunctionalserialCacheCmdcacheadd_local1927810838/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cache add minikube-local-cache-test:functional-734361
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cache delete minikube-local-cache-test:functional-734361
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-734361
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.675857ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
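Note: the reload round trip, with every command taken verbatim from the run above: delete the cached image inside the node, confirm it is gone (crictl inspecti exits 1), repopulate from minikube's local cache, and confirm it is back:

    $ out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl rmi registry.k8s.io/pause:latest
    $ out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    $ out/minikube-linux-amd64 -p functional-734361 cache reload
    $ out/minikube-linux-amd64 -p functional-734361 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored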
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 kubectl -- --context functional-734361 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-734361 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
TestFunctional/serial/ExtraConfig (44.89s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-734361 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 08:37:08.945145    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:08.951593    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:08.962978    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:08.984410    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:09.025839    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:09.107298    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:09.268837    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:09.590435    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:10.232522    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:11.514105    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:14.076968    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:19.198519    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:29.439871    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-734361 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.884975079s)
functional_test.go:776: restart took 44.885100504s for "functional-734361" cluster.
I1025 08:37:35.865482    9473 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (44.89s)
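Note: --extra-config=apiserver.enable-admission-plugins=... is passed through to the kube-apiserver (the corresponding ExtraOptions entry is visible in the DryRun config dump further down). One hedged way to confirm the flag landed, using the standard kubeadm component label; the jsonpath is illustrative:

    $ kubectl --context functional-734361 -n kube-system get pod -l component=kube-apiserver \
        -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission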
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-734361 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
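Note: the health check selects control-plane pods by label and asserts phase and readiness per component, as printed above. An equivalent ad-hoc query, assuming the standard kubeadm labels; the jsonpath is illustrative:

    $ kubectl --context functional-734361 -n kube-system get po -l tier=control-plane \
        -o jsonpath='{range .items[*]}{.metadata.labels.component}{"\t"}{.status.phase}{"\n"}{end}'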
TestFunctional/serial/LogsCmd (1.22s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-734361 logs: (1.224372139s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)
TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 logs --file /tmp/TestFunctionalserialLogsFileCmd2249228633/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-734361 logs --file /tmp/TestFunctionalserialLogsFileCmd2249228633/001/logs.txt: (1.251584532s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)
TestFunctional/serial/InvalidService (4.16s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-734361 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-734361
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-734361: exit status 115 (346.980822ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31720 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-734361 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
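Note: SVC_UNREACHABLE here means the Service object exists (the NodePort table is printed) but no running pod backs it. A sketch of what testdata/invalidsvc.yaml plausibly contains; the selector value is hypothetical, and the only property that matters is that it matches no pod:

    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod
      ports:
        - port: 80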
TestFunctional/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 config get cpus: exit status 14 (95.573844ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 config get cpus: exit status 14 (84.574537ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
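Note: both Non-zero exits above are the expected behavior, since `config get` on an unset key exits 14 instead of printing an empty value. The full round trip, verbatim from the logged commands:

    $ out/minikube-linux-amd64 -p functional-734361 config set cpus 2
    $ out/minikube-linux-amd64 -p functional-734361 config get cpus     # prints 2, exit 0
    $ out/minikube-linux-amd64 -p functional-734361 config unset cpus
    $ out/minikube-linux-amd64 -p functional-734361 config get cpus     # exit 14: key not found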
TestFunctional/parallel/DashboardCmd (6.34s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-734361 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-734361 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 43810: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.34s)
TestFunctional/parallel/DryRun (0.46s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-734361 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-734361 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.6335ms)
-- stdout --
	* [functional-734361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1025 08:37:44.711806   43125 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:37:44.711944   43125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:37:44.711952   43125 out.go:374] Setting ErrFile to fd 2...
	I1025 08:37:44.711958   43125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:37:44.712224   43125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:37:44.712813   43125 out.go:368] Setting JSON to false
	I1025 08:37:44.714017   43125 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1213,"bootTime":1761380252,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:37:44.714145   43125 start.go:141] virtualization: kvm guest
	I1025 08:37:44.716721   43125 out.go:179] * [functional-734361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:37:44.718336   43125 notify.go:220] Checking for updates...
	I1025 08:37:44.718871   43125 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:37:44.721132   43125 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:37:44.722484   43125 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:37:44.723662   43125 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 08:37:44.725295   43125 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:37:44.727509   43125 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:37:44.729410   43125 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:37:44.730089   43125 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:37:44.758611   43125 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:37:44.758776   43125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:37:44.829273   43125 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 08:37:44.816377974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:37:44.829462   43125 docker.go:318] overlay module found
	I1025 08:37:44.831497   43125 out.go:179] * Using the docker driver based on existing profile
	I1025 08:37:44.832924   43125 start.go:305] selected driver: docker
	I1025 08:37:44.832942   43125 start.go:925] validating driver "docker" against &{Name:functional-734361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-734361 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:37:44.833054   43125 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:37:44.835108   43125 out.go:203] 
	W1025 08:37:44.836563   43125 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 08:37:44.837873   43125 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-734361 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
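Note: --dry-run walks the whole validation path without touching the existing profile, so the undersized request fails fast: 250MiB is below minikube's 1800MB usable minimum and the command exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second invocation using the profile's real memory setting validates cleanly:

    $ out/minikube-linux-amd64 start -p functional-734361 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio   # exit 23
    $ out/minikube-linux-amd64 start -p functional-734361 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio             # exit 0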
TestFunctional/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-734361 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-734361 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.988278ms)
-- stdout --
	* [functional-734361] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1025 08:37:44.520860   42985 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:37:44.521100   42985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:37:44.521108   42985 out.go:374] Setting ErrFile to fd 2...
	I1025 08:37:44.521112   42985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:37:44.521408   42985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:37:44.521870   42985 out.go:368] Setting JSON to false
	I1025 08:37:44.522853   42985 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1213,"bootTime":1761380252,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:37:44.522938   42985 start.go:141] virtualization: kvm guest
	I1025 08:37:44.524986   42985 out.go:179] * [functional-734361] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1025 08:37:44.526197   42985 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:37:44.526201   42985 notify.go:220] Checking for updates...
	I1025 08:37:44.527593   42985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:37:44.528922   42985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 08:37:44.531318   42985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 08:37:44.532475   42985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:37:44.533677   42985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:37:44.535613   42985 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:37:44.536263   42985 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:37:44.560500   42985 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:37:44.560589   42985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:37:44.626127   42985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 08:37:44.614295812 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:37:44.626239   42985 docker.go:318] overlay module found
	I1025 08:37:44.628468   42985 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1025 08:37:44.629703   42985 start.go:305] selected driver: docker
	I1025 08:37:44.629719   42985 start.go:925] validating driver "docker" against &{Name:functional-734361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-734361 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:37:44.629825   42985 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:37:44.631972   42985 out.go:203] 
	W1025 08:37:44.633368   42985 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 08:37:44.634664   42985 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
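Note: this is the same failing dry-run as above, but with localized output: « Utilisation du pilote docker basé sur le profil existant » is "Using the docker driver based on existing profile", and the RSRC_INSUFFICIENT_REQ_MEMORY message is likewise rendered in French. Presumably the harness switches minikube's language via the locale environment before invoking the binary; the exact variable and value below are an assumption:

    $ LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-734361 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio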
TestFunctional/parallel/StatusCmd (1.11s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [922365e9-874a-439e-94bd-0cb2d701d9df] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003784001s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-734361 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-734361 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-734361 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-734361 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3b63e01c-0cb6-4e3f-bf3c-dc91b36b652f] Pending
helpers_test.go:352: "sp-pod" [3b63e01c-0cb6-4e3f-bf3c-dc91b36b652f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3b63e01c-0cb6-4e3f-bf3c-dc91b36b652f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004395145s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-734361 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-734361 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-734361 delete -f testdata/storage-provisioner/pod.yaml: (1.974672449s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-734361 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c997880a-930e-4f9f-8227-0a278890bd4f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c997880a-930e-4f9f-8227-0a278890bd4f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003274483s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-734361 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.76s)
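Note: the flow above is apply a PVC, run a pod that writes /tmp/mount/foo, delete and recreate the pod, and confirm the file survived — i.e. the claim outlives any one pod. A hypothetical minimal equivalent of testdata/storage-provisioner/pvc.yaml (the real manifest may differ; the claim name myclaim is taken from the log):

	# Create a claim named myclaim backed by the default StorageClass.
	kubectl --context functional-734361 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF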

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.88s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh -n functional-734361 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cp functional-734361:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2406761162/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh -n functional-734361 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh -n functional-734361 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.88s)
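Note: the three invocations above cover both copy directions plus a guest destination directory that does not exist yet. Condensed, with a hypothetical local destination path:

	# host -> guest, then guest -> host; `cp` creates missing target directories.
	out/minikube-linux-amd64 -p functional-734361 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-734361 cp functional-734361:/home/docker/cp-test.txt ./cp-test.txt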

TestFunctional/parallel/MySQL (15.32s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-734361 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-t5w77" [558ddceb-57ee-4ff2-b62b-f16d29d001f2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-t5w77" [558ddceb-57ee-4ff2-b62b-f16d29d001f2] Running
I1025 08:38:03.785807    9473 detect.go:223] nested VM detected
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 12.003279016s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-734361 exec mysql-5bb876957f-t5w77 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-734361 exec mysql-5bb876957f-t5w77 -- mysql -ppassword -e "show databases;": exit status 1 (124.681124ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1025 08:38:08.805588    9473 retry.go:31] will retry after 1.147840911s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-734361 exec mysql-5bb876957f-t5w77 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-734361 exec mysql-5bb876957f-t5w77 -- mysql -ppassword -e "show databases;": exit status 1 (82.429453ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1025 08:38:10.036650    9473 retry.go:31] will retry after 1.704237028s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-734361 exec mysql-5bb876957f-t5w77 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (15.32s)
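Note: the two failed attempts above (access denied while the root account is still being provisioned, then a socket error while mysqld restarts) are why the harness retries with backoff before the final pass. The same wait, sketched as a shell loop and assuming the Deployment from testdata/mysql.yaml is named mysql:

	# Poll until mysqld accepts queries, mirroring the retry.go backoff above.
	until kubectl --context functional-734361 exec deploy/mysql -- \
	      mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
		sleep 2
	done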

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9473/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo cat /etc/test/nested/copy/9473/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.84s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9473.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo cat /etc/ssl/certs/9473.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9473.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo cat /usr/share/ca-certificates/9473.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/94732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo cat /etc/ssl/certs/94732.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/94732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo cat /usr/share/ca-certificates/94732.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)
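Note: the hash-named paths checked above (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) are OpenSSL subject-hash filenames — the hash of the certificate subject plus a .0 collision counter — which is how minikube makes synced certs discoverable by TLS clients. The hash for any PEM cert can be computed directly:

	# Prints the 8-hex-digit subject hash used to name files like 51391683.0.
	openssl x509 -noout -subject_hash -in /etc/ssl/certs/9473.pem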

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-734361 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
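Note: the go-template above ranges over the first node's label map. An equivalent, arguably simpler query with jsonpath:

	# Dump the same label map without a range loop.
	kubectl --context functional-734361 get nodes -o jsonpath='{.items[0].metadata.labels}'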

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh "sudo systemctl is-active docker": exit status 1 (304.545403ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh "sudo systemctl is-active containerd": exit status 1 (309.368239ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
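Note: this test passes because both checks fail — with ContainerRuntime=crio, docker and containerd must be inactive, and `systemctl is-active` signals that through its exit code (3 for inactive, hence "Process exited with status 3" above) as well as its stdout. A sketch of the same check in a script:

	# No output parsing needed; is-active exits non-zero for any non-active state.
	out/minikube-linux-amd64 -p functional-734361 ssh "sudo systemctl is-active docker" \
	  || echo "docker is disabled, as expected on a crio cluster"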

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/MountCmd/any-port (6.25s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdany-port3062244726/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761381463026525268" to /tmp/TestFunctionalparallelMountCmdany-port3062244726/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761381463026525268" to /tmp/TestFunctionalparallelMountCmdany-port3062244726/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761381463026525268" to /tmp/TestFunctionalparallelMountCmdany-port3062244726/001/test-1761381463026525268
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.315023ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 08:37:43.358231    9473 retry.go:31] will retry after 588.215774ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 08:37 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 08:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 08:37 test-1761381463026525268
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh cat /mount-9p/test-1761381463026525268
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-734361 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6e3e706f-3287-4331-b5de-64e875fd0b19] Pending
helpers_test.go:352: "busybox-mount" [6e3e706f-3287-4331-b5de-64e875fd0b19] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6e3e706f-3287-4331-b5de-64e875fd0b19] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6e3e706f-3287-4331-b5de-64e875fd0b19] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004407528s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-734361 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdany-port3062244726/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.25s)
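Note: the sequence above is the whole 9p mount contract — start the mount daemon, wait until findmnt reports a 9p filesystem at the target, then exercise it from both host and pod sides. A minimal manual version, with a hypothetical host directory:

	# Background the mount helper, then verify the guest actually sees a 9p mount.
	out/minikube-linux-amd64 mount -p functional-734361 /tmp/hostdir:/mount-9p &
	out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T /mount-9p | grep 9p"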

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "403.589666ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.96949ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "372.114366ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.571459ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdspecific-port1404959856/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (318.916387ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 08:37:49.592476    9473 retry.go:31] will retry after 298.911202ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T /mount-9p | grep 9p"
E1025 08:37:49.921155    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdspecific-port1404959856/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
I1025 08:37:50.582629    9473 detect.go:223] nested VM detected
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh "sudo umount -f /mount-9p": exit status 1 (290.948153ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-734361 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdspecific-port1404959856/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdVerifyCleanup459664964/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdVerifyCleanup459664964/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdVerifyCleanup459664964/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T" /mount1
2025/10/25 08:37:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T" /mount1: exit status 1 (354.46172ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 08:37:51.379605    9473 retry.go:31] will retry after 627.877911ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-734361 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdVerifyCleanup459664964/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdVerifyCleanup459664964/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-734361 /tmp/TestFunctionalparallelMountCmdVerifyCleanup459664964/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.16s)
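Note: the cleanup path above hinges on `mount --kill=true`, which terminates every mount helper for the profile at once — which is why the three individual stop attempts afterwards find no parent process. Standalone:

	# Kill all outstanding mount daemons for the profile in one shot.
	out/minikube-linux-amd64 mount -p functional-734361 --kill=true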

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-734361 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-734361 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-734361 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 45874: os: process already finished
helpers_test.go:519: unable to terminate pid 45599: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-734361 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-734361 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-734361 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [11a30e44-f940-40e7-abda-e6e80f089bb2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [11a30e44-f940-40e7-abda-e6e80f089bb2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00370453s
I1025 08:38:01.849717    9473 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.26s)
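Note: with a tunnel process running, LoadBalancer services receive a reachable ingress IP instead of staying pending, which is what kapi.go waits for above. Typical manual usage (backgrounding the tunnel is only for illustration; it normally holds the foreground and may prompt for sudo):

	# Start the tunnel, then read the ingress IP it assigns to the service.
	out/minikube-linux-amd64 -p functional-734361 tunnel &
	kubectl --context functional-734361 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'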

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-734361 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.135.120 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-734361 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-734361 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-734361 image ls --format short --alsologtostderr:
I1025 08:38:16.321223   49654 out.go:360] Setting OutFile to fd 1 ...
I1025 08:38:16.321515   49654 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.321525   49654 out.go:374] Setting ErrFile to fd 2...
I1025 08:38:16.321529   49654 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.321757   49654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
I1025 08:38:16.322294   49654 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.322378   49654 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.322767   49654 cli_runner.go:164] Run: docker container inspect functional-734361 --format={{.State.Status}}
I1025 08:38:16.340896   49654 ssh_runner.go:195] Run: systemctl --version
I1025 08:38:16.340956   49654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734361
I1025 08:38:16.358243   49654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/functional-734361/id_rsa Username:docker}
I1025 08:38:16.456263   49654 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
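Note: `image ls` renders the same image inventory in four formats, each exercised in this run. A compact way to compare them:

	# short, table, json, and yaml views of the cached images.
	for f in short table json yaml; do
		out/minikube-linux-amd64 -p functional-734361 image ls --format "$f"
	done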

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-734361 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-734361 image ls --format table --alsologtostderr:
I1025 08:38:16.788854   49922 out.go:360] Setting OutFile to fd 1 ...
I1025 08:38:16.788960   49922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.788969   49922 out.go:374] Setting ErrFile to fd 2...
I1025 08:38:16.788973   49922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.789340   49922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
I1025 08:38:16.790908   49922 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.791064   49922 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.791422   49922 cli_runner.go:164] Run: docker container inspect functional-734361 --format={{.State.Status}}
I1025 08:38:16.811052   49922 ssh_runner.go:195] Run: systemctl --version
I1025 08:38:16.811110   49922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734361
I1025 08:38:16.831925   49922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/functional-734361/id_rsa Username:docker}
I1025 08:38:16.933236   49922 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-734361 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b46108996944
9f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b
31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.
k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c80c8dbafe7dd71fc21527912a6dd20
ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f30
80d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4
-glibc"],"size":"4631262"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-734361 image ls --format json --alsologtostderr:
I1025 08:38:16.555134   49761 out.go:360] Setting OutFile to fd 1 ...
I1025 08:38:16.555409   49761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.555420   49761 out.go:374] Setting ErrFile to fd 2...
I1025 08:38:16.555426   49761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.555698   49761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
I1025 08:38:16.556239   49761 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.556358   49761 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.556756   49761 cli_runner.go:164] Run: docker container inspect functional-734361 --format={{.State.Status}}
I1025 08:38:16.574970   49761 ssh_runner.go:195] Run: systemctl --version
I1025 08:38:16.575020   49761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734361
I1025 08:38:16.593663   49761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/functional-734361/id_rsa Username:docker}
I1025 08:38:16.699980   49761 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-734361 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-734361 image ls --format yaml --alsologtostderr:
I1025 08:38:16.321710   49655 out.go:360] Setting OutFile to fd 1 ...
I1025 08:38:16.321977   49655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.321986   49655 out.go:374] Setting ErrFile to fd 2...
I1025 08:38:16.321990   49655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.322157   49655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
I1025 08:38:16.322663   49655 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.322756   49655 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.323122   49655 cli_runner.go:164] Run: docker container inspect functional-734361 --format={{.State.Status}}
I1025 08:38:16.341141   49655 ssh_runner.go:195] Run: systemctl --version
I1025 08:38:16.341176   49655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734361
I1025 08:38:16.357943   49655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/functional-734361/id_rsa Username:docker}
I1025 08:38:16.456333   49655 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
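
Note: the "image ls --format yaml" output above is a plain YAML list, so it is easy to consume programmatically. A minimal Go sketch of doing so, assuming the third-party gopkg.in/yaml.v3 package; the struct fields simply mirror the keys visible in the listing, and the YAML is fed in on stdin (e.g. piped from the same image ls command):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// imageInfo mirrors the keys seen in the `image ls --format yaml` output above.
type imageInfo struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Decode the whole document (a YAML sequence) from stdin.
	var images []imageInfo
	if err := yaml.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		// Untagged images (the dashboard entries above) have an empty repoTags list.
		fmt.Printf("%.12s  tags=%v  size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}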

TestFunctional/parallel/ImageCommands/ImageBuild (2.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-734361 ssh pgrep buildkitd: exit status 1 (280.847223ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image build -t localhost/my-image:functional-734361 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-734361 image build -t localhost/my-image:functional-734361 testdata/build --alsologtostderr: (1.666880436s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-734361 image build -t localhost/my-image:functional-734361 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8fbe03df635
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-734361
--> 7d8d367ed6e
Successfully tagged localhost/my-image:functional-734361
7d8d367ed6ef7a8ee11b7a9ea899524a6400bbeb927b1186c65562af1b83007e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-734361 image build -t localhost/my-image:functional-734361 testdata/build --alsologtostderr:
I1025 08:38:16.831413   49934 out.go:360] Setting OutFile to fd 1 ...
I1025 08:38:16.831744   49934 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.831758   49934 out.go:374] Setting ErrFile to fd 2...
I1025 08:38:16.831764   49934 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:38:16.832069   49934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
I1025 08:38:16.832758   49934 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.833500   49934 config.go:182] Loaded profile config "functional-734361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:38:16.833943   49934 cli_runner.go:164] Run: docker container inspect functional-734361 --format={{.State.Status}}
I1025 08:38:16.852799   49934 ssh_runner.go:195] Run: systemctl --version
I1025 08:38:16.852842   49934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734361
I1025 08:38:16.870538   49934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/functional-734361/id_rsa Username:docker}
I1025 08:38:16.970589   49934 build_images.go:161] Building image from path: /tmp/build.214651157.tar
I1025 08:38:16.970685   49934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 08:38:16.979243   49934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.214651157.tar
I1025 08:38:16.983178   49934 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.214651157.tar: stat -c "%s %y" /var/lib/minikube/build/build.214651157.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.214651157.tar': No such file or directory
I1025 08:38:16.983212   49934 ssh_runner.go:362] scp /tmp/build.214651157.tar --> /var/lib/minikube/build/build.214651157.tar (3072 bytes)
I1025 08:38:17.001129   49934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.214651157
I1025 08:38:17.009241   49934 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.214651157 -xf /var/lib/minikube/build/build.214651157.tar
I1025 08:38:17.017415   49934 crio.go:315] Building image: /var/lib/minikube/build/build.214651157
I1025 08:38:17.017486   49934 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-734361 /var/lib/minikube/build/build.214651157 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1025 08:38:18.418734   49934 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-734361 /var/lib/minikube/build/build.214651157 --cgroup-manager=cgroupfs: (1.401224561s)
I1025 08:38:18.418802   49934 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.214651157
I1025 08:38:18.427078   49934 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.214651157.tar
I1025 08:38:18.434569   49934 build_images.go:217] Built localhost/my-image:functional-734361 from /tmp/build.214651157.tar
I1025 08:38:18.434598   49934 build_images.go:133] succeeded building to: functional-734361
I1025 08:38:18.434602   49934 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls
E1025 08:38:30.883396    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:39:52.805447    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:42:08.942795    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:42:36.647514    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:47:08.942415    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.19s)
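
Note: the stderr above shows how minikube builds images on the crio runtime: it stages the build context as a tar under /var/lib/minikube/build, unpacks it, and shells out to podman with the cgroupfs cgroup manager. A rough Go sketch of those node-side steps, with illustrative paths and tag (it assumes tar and podman are on PATH and omits the sudo handling the real code does over ssh):

package main

import (
	"log"
	"os/exec"
)

// buildWithPodman mirrors the node-side steps from the stderr above: unpack a
// staged build-context tar, then run podman build with the cgroupfs manager.
func buildWithPodman(contextTar, buildDir, tag string) error {
	steps := [][]string{
		{"mkdir", "-p", buildDir},
		{"tar", "-C", buildDir, "-xf", contextTar},
		{"podman", "build", "-t", tag, buildDir, "--cgroup-manager=cgroupfs"},
	}
	for _, argv := range steps {
		if out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput(); err != nil {
			log.Printf("%v failed: %s", argv, out)
			return err
		}
	}
	return nil
}

func main() {
	// Illustrative paths; the run above used /var/lib/minikube/build/build.214651157.
	if err := buildWithPodman("/tmp/build.tar", "/tmp/build-ctx", "localhost/my-image:demo"); err != nil {
		log.Fatal(err)
	}
}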

TestFunctional/parallel/ImageCommands/Setup (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-734361
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image rm kicbase/echo-server:functional-734361 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ServiceCmd/List (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-734361 service list: (1.709337546s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-734361 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-734361 service list -o json: (1.69572485s)
functional_test.go:1504: Took "1.695829015s" to run "out/minikube-linux-amd64 -p functional-734361 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-734361
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-734361
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-734361
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (147.85s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m27.107970707s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (147.85s)

TestMultiControlPlane/serial/DeployApp (4.22s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 kubectl -- rollout status deployment/busybox: (1.909577294s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-jphnh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-qb7n8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-rhgnz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-jphnh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-qb7n8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-rhgnz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-jphnh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-qb7n8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-rhgnz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.22s)
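
Note: the DeployApp assertions above boil down to a pods-by-names DNS matrix: every busybox replica must resolve an external name, the short service name, and the fully qualified service name. A minimal sketch of that loop, with the pod names copied from this run (the real test discovers them via kubectl get pods -o jsonpath=... and goes through minikube kubectl --; the direct kubectl invocation here is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-jphnh", "busybox-7b57f96db7-qb7n8", "busybox-7b57f96db7-rhgnz"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Each pod must resolve external, short, and fully qualified names.
			out, err := exec.Command("kubectl", "--context", "ha-170054",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s FAILED: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s -> %s ok\n", pod, name)
		}
	}
}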

TestMultiControlPlane/serial/PingHostFromPods (1.04s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-jphnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-jphnh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-qb7n8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-qb7n8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-rhgnz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 kubectl -- exec busybox-7b57f96db7-rhgnz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)
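
Note: the pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 takes the fifth line of busybox's nslookup output and extracts its third space-separated field, which is where the resolved host address sits; the test then pings that address from the pod. A Go equivalent of the extraction, run against a sample shaped like busybox output (the sample is illustrative, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take the fifth
// line and return its third space-separated field (index 2).
func hostIPFromNslookup(out string) (string, bool) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	if ip, ok := hostIPFromNslookup(sample); ok {
		fmt.Println("host IP:", ip) // the test then runs `ping -c 1` against this address
	}
}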

TestMultiControlPlane/serial/AddWorkerNode (25.13s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 node add --alsologtostderr -v 5: (24.226061825s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.13s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-170054 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (17.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp testdata/cp-test.txt ha-170054:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3962870210/001/cp-test_ha-170054.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054:/home/docker/cp-test.txt ha-170054-m02:/home/docker/cp-test_ha-170054_ha-170054-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test_ha-170054_ha-170054-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054:/home/docker/cp-test.txt ha-170054-m03:/home/docker/cp-test_ha-170054_ha-170054-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test_ha-170054_ha-170054-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054:/home/docker/cp-test.txt ha-170054-m04:/home/docker/cp-test_ha-170054_ha-170054-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test_ha-170054_ha-170054-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp testdata/cp-test.txt ha-170054-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3962870210/001/cp-test_ha-170054-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m02:/home/docker/cp-test.txt ha-170054:/home/docker/cp-test_ha-170054-m02_ha-170054.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test_ha-170054-m02_ha-170054.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m02:/home/docker/cp-test.txt ha-170054-m03:/home/docker/cp-test_ha-170054-m02_ha-170054-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test_ha-170054-m02_ha-170054-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m02:/home/docker/cp-test.txt ha-170054-m04:/home/docker/cp-test_ha-170054-m02_ha-170054-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test_ha-170054-m02_ha-170054-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp testdata/cp-test.txt ha-170054-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3962870210/001/cp-test_ha-170054-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m03:/home/docker/cp-test.txt ha-170054:/home/docker/cp-test_ha-170054-m03_ha-170054.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test_ha-170054-m03_ha-170054.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m03:/home/docker/cp-test.txt ha-170054-m02:/home/docker/cp-test_ha-170054-m03_ha-170054-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test_ha-170054-m03_ha-170054-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m03:/home/docker/cp-test.txt ha-170054-m04:/home/docker/cp-test_ha-170054-m03_ha-170054-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test_ha-170054-m03_ha-170054-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp testdata/cp-test.txt ha-170054-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3962870210/001/cp-test_ha-170054-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m04:/home/docker/cp-test.txt ha-170054:/home/docker/cp-test_ha-170054-m04_ha-170054.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054 "sudo cat /home/docker/cp-test_ha-170054-m04_ha-170054.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m04:/home/docker/cp-test.txt ha-170054-m02:/home/docker/cp-test_ha-170054-m04_ha-170054-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m02 "sudo cat /home/docker/cp-test_ha-170054-m04_ha-170054-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 cp ha-170054-m04:/home/docker/cp-test.txt ha-170054-m03:/home/docker/cp-test_ha-170054-m04_ha-170054-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 ssh -n ha-170054-m03 "sudo cat /home/docker/cp-test_ha-170054-m04_ha-170054-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.40s)
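
Note: the long run of cp/ssh pairs above is an all-pairs matrix: for each node, copy the test file in, copy it back out to the host, and push it to every other node under a pair-specific name, verifying each hop with ssh ... sudo cat. A sketch of the loop shape that generates that matrix (it only prints the minikube cp arguments rather than executing them):

package main

import "fmt"

func main() {
	nodes := []string{"ha-170054", "ha-170054-m02", "ha-170054-m03", "ha-170054-m04"}
	for _, src := range nodes {
		// Stage the file on src, then copy it back to the host...
		fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("cp %s:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3962870210/001/cp-test_%s.txt\n", src, src)
		// ...and fan it out to every other node under a pair-specific name.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}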

TestMultiControlPlane/serial/StopSecondaryNode (13.3s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 node stop m02 --alsologtostderr -v 5: (12.593815877s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5: exit status 7 (710.428489ms)

-- stdout --
	ha-170054
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170054-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-170054-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-170054-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1025 08:51:37.414152   74251 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:51:37.414610   74251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:51:37.414622   74251 out.go:374] Setting ErrFile to fd 2...
	I1025 08:51:37.414626   74251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:51:37.414885   74251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:51:37.415185   74251 out.go:368] Setting JSON to false
	I1025 08:51:37.415227   74251 mustload.go:65] Loading cluster: ha-170054
	I1025 08:51:37.415394   74251 notify.go:220] Checking for updates...
	I1025 08:51:37.415952   74251 config.go:182] Loaded profile config "ha-170054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:51:37.415976   74251 status.go:174] checking status of ha-170054 ...
	I1025 08:51:37.416510   74251 cli_runner.go:164] Run: docker container inspect ha-170054 --format={{.State.Status}}
	I1025 08:51:37.436331   74251 status.go:371] ha-170054 host status = "Running" (err=<nil>)
	I1025 08:51:37.436358   74251 host.go:66] Checking if "ha-170054" exists ...
	I1025 08:51:37.436700   74251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-170054
	I1025 08:51:37.454947   74251 host.go:66] Checking if "ha-170054" exists ...
	I1025 08:51:37.455252   74251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:51:37.455315   74251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-170054
	I1025 08:51:37.474723   74251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/ha-170054/id_rsa Username:docker}
	I1025 08:51:37.574936   74251 ssh_runner.go:195] Run: systemctl --version
	I1025 08:51:37.581892   74251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:51:37.594626   74251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:51:37.652949   74251 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 08:51:37.642723682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:51:37.653547   74251 kubeconfig.go:125] found "ha-170054" server: "https://192.168.49.254:8443"
	I1025 08:51:37.653578   74251 api_server.go:166] Checking apiserver status ...
	I1025 08:51:37.653621   74251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:51:37.665994   74251 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	W1025 08:51:37.675029   74251 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:51:37.675084   74251 ssh_runner.go:195] Run: ls
	I1025 08:51:37.679111   74251 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 08:51:37.683551   74251 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 08:51:37.683575   74251 status.go:463] ha-170054 apiserver status = Running (err=<nil>)
	I1025 08:51:37.683584   74251 status.go:176] ha-170054 status: &{Name:ha-170054 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:51:37.683599   74251 status.go:174] checking status of ha-170054-m02 ...
	I1025 08:51:37.683860   74251 cli_runner.go:164] Run: docker container inspect ha-170054-m02 --format={{.State.Status}}
	I1025 08:51:37.702431   74251 status.go:371] ha-170054-m02 host status = "Stopped" (err=<nil>)
	I1025 08:51:37.702453   74251 status.go:384] host is not running, skipping remaining checks
	I1025 08:51:37.702458   74251 status.go:176] ha-170054-m02 status: &{Name:ha-170054-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:51:37.702487   74251 status.go:174] checking status of ha-170054-m03 ...
	I1025 08:51:37.702743   74251 cli_runner.go:164] Run: docker container inspect ha-170054-m03 --format={{.State.Status}}
	I1025 08:51:37.721084   74251 status.go:371] ha-170054-m03 host status = "Running" (err=<nil>)
	I1025 08:51:37.721105   74251 host.go:66] Checking if "ha-170054-m03" exists ...
	I1025 08:51:37.721362   74251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-170054-m03
	I1025 08:51:37.739539   74251 host.go:66] Checking if "ha-170054-m03" exists ...
	I1025 08:51:37.739805   74251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:51:37.739846   74251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-170054-m03
	I1025 08:51:37.757840   74251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/ha-170054-m03/id_rsa Username:docker}
	I1025 08:51:37.856466   74251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:51:37.869887   74251 kubeconfig.go:125] found "ha-170054" server: "https://192.168.49.254:8443"
	I1025 08:51:37.869914   74251 api_server.go:166] Checking apiserver status ...
	I1025 08:51:37.869946   74251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:51:37.881152   74251 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W1025 08:51:37.890013   74251 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:51:37.890072   74251 ssh_runner.go:195] Run: ls
	I1025 08:51:37.893858   74251 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 08:51:37.898014   74251 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 08:51:37.898039   74251 status.go:463] ha-170054-m03 apiserver status = Running (err=<nil>)
	I1025 08:51:37.898048   74251 status.go:176] ha-170054-m03 status: &{Name:ha-170054-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:51:37.898067   74251 status.go:174] checking status of ha-170054-m04 ...
	I1025 08:51:37.898348   74251 cli_runner.go:164] Run: docker container inspect ha-170054-m04 --format={{.State.Status}}
	I1025 08:51:37.917155   74251 status.go:371] ha-170054-m04 host status = "Running" (err=<nil>)
	I1025 08:51:37.917179   74251 host.go:66] Checking if "ha-170054-m04" exists ...
	I1025 08:51:37.917425   74251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-170054-m04
	I1025 08:51:37.936043   74251 host.go:66] Checking if "ha-170054-m04" exists ...
	I1025 08:51:37.936306   74251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:51:37.936341   74251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-170054-m04
	I1025 08:51:37.954968   74251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/ha-170054-m04/id_rsa Username:docker}
	I1025 08:51:38.051984   74251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:51:38.064481   74251 status.go:176] ha-170054-m04 status: &{Name:ha-170054-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.30s)
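
Note: minikube status deliberately exits non-zero when any node in the profile is down (exit status 7 in the run above, with m02 stopped), so scripts can gate on it. A minimal Go sketch of consuming that contract, treating any non-zero exit as degraded rather than depending on the specific code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-170054", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // per-node host/kubelet/apiserver/kubeconfig lines, as above
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero exit (7 in the run above) signals at least one stopped component.
		fmt.Printf("cluster degraded, status exit code %d\n", exitErr.ExitCode())
	}
}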

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.33s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 node start m02 --alsologtostderr -v 5: (8.363375142s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.33s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.34s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 stop --alsologtostderr -v 5
E1025 08:52:08.943278    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 stop --alsologtostderr -v 5: (51.018235s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 start --wait true --alsologtostderr -v 5
E1025 08:52:42.753526    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:42.759982    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:42.771415    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:42.792959    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:42.834428    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:42.915945    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:43.077672    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:43.399424    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:44.040937    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:45.322774    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:47.884765    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:53.006938    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:53:03.248610    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:53:23.730481    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:53:32.009315    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 start --wait true --alsologtostderr -v 5: (1m0.1868756s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.34s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 node delete m03 --alsologtostderr -v 5: (9.75714946s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)
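
Note: the go-template passed to kubectl get nodes above prints one line per node containing the status of its Ready condition, which is what the test asserts on. kubectl evaluates the template against the nodes' JSON (lowercase keys); the sketch below renders the same template shape in Go against stand-in types, so the field names are capitalized:

package main

import (
	"os"
	"text/template"
)

// The Ready-condition template from the test, adapted to exported Go fields.
const readyTmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

func main() {
	var list struct{ Items []node }
	n := node{}
	n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list.Items = []node{n, n, n} // three nodes remain after deleting m03
	// Prints one " True" line per node, which is what the test checks for.
	template.Must(template.New("ready").Parse(readyTmpl)).Execute(os.Stdout, list)
}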

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (47.29s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 stop --alsologtostderr -v 5
E1025 08:54:04.692580    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 stop --alsologtostderr -v 5: (47.183248898s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5: exit status 7 (110.61928ms)

-- stdout --
	ha-170054
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-170054-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-170054-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 08:54:38.902743   88293 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:54:38.903010   88293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:54:38.903019   88293 out.go:374] Setting ErrFile to fd 2...
	I1025 08:54:38.903023   88293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:54:38.903274   88293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 08:54:38.903498   88293 out.go:368] Setting JSON to false
	I1025 08:54:38.903530   88293 mustload.go:65] Loading cluster: ha-170054
	I1025 08:54:38.903578   88293 notify.go:220] Checking for updates...
	I1025 08:54:38.903962   88293 config.go:182] Loaded profile config "ha-170054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:54:38.903981   88293 status.go:174] checking status of ha-170054 ...
	I1025 08:54:38.904434   88293 cli_runner.go:164] Run: docker container inspect ha-170054 --format={{.State.Status}}
	I1025 08:54:38.922865   88293 status.go:371] ha-170054 host status = "Stopped" (err=<nil>)
	I1025 08:54:38.922885   88293 status.go:384] host is not running, skipping remaining checks
	I1025 08:54:38.922891   88293 status.go:176] ha-170054 status: &{Name:ha-170054 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:54:38.922910   88293 status.go:174] checking status of ha-170054-m02 ...
	I1025 08:54:38.923144   88293 cli_runner.go:164] Run: docker container inspect ha-170054-m02 --format={{.State.Status}}
	I1025 08:54:38.940235   88293 status.go:371] ha-170054-m02 host status = "Stopped" (err=<nil>)
	I1025 08:54:38.940272   88293 status.go:384] host is not running, skipping remaining checks
	I1025 08:54:38.940281   88293 status.go:176] ha-170054-m02 status: &{Name:ha-170054-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:54:38.940304   88293 status.go:174] checking status of ha-170054-m04 ...
	I1025 08:54:38.940555   88293 cli_runner.go:164] Run: docker container inspect ha-170054-m04 --format={{.State.Status}}
	I1025 08:54:38.957513   88293 status.go:371] ha-170054-m04 host status = "Stopped" (err=<nil>)
	I1025 08:54:38.957535   88293 status.go:384] host is not running, skipping remaining checks
	I1025 08:54:38.957541   88293 status.go:176] ha-170054-m04 status: &{Name:ha-170054-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.29s)

TestMultiControlPlane/serial/RestartCluster (52.9s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1025 08:55:26.614526    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.079947291s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (52.90s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

TestMultiControlPlane/serial/AddSecondaryNode (35.21s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-170054 node add --control-plane --alsologtostderr -v 5: (34.272598503s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-170054 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (38.6s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-550540 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-550540 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.600724984s)
--- PASS: TestJSONOutput/start/Command (38.60s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.18s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-550540 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-550540 --output=json --user=testUser: (6.177774036s)
--- PASS: TestJSONOutput/stop/Command (6.18s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-707914 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-707914 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.573461ms)

-- stdout --
	{"specversion":"1.0","id":"dfafc970-11e6-4f41-8572-08f2e86a537a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-707914] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"21f20188-9dbf-4da1-a618-9f08c6006151","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21796"}}
	{"specversion":"1.0","id":"49b46aea-d262-4178-8c76-c5f9896bbd59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb646314-fa31-438c-ad9b-0f2b5e7740b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig"}}
	{"specversion":"1.0","id":"5ad5cf2a-3e1b-4e05-971a-2b9b8d75b0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube"}}
	{"specversion":"1.0","id":"10a1235b-b7d6-4ff1-9f1c-f59e0b07bb0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"72c0245a-d2d0-41f0-9321-b0485a925d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d0f49300-9363-47c4-8135-70c60d62f103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-707914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-707914
--- PASS: TestErrorJSONOutput (0.22s)
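
Note: every line minikube emits under --output=json is a self-contained CloudEvents envelope (specversion, id, source, type, data), so the stream can be consumed line by line. A minimal sketch, assuming jq is available, that keeps only the step and error messages:

	out/minikube-linux-amd64 start -p json-output-error-707914 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type | endswith(".step") or endswith(".error")) | .data.message'

JSON mode does not change the process exit code, so the exit status 56 (DRV_UNSUPPORTED_OS) seen above can still be branched on in scripts.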

TestKicCustomNetwork/create_custom_network (32.74s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-148520 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-148520 --network=: (30.513913066s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-148520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-148520
E1025 08:57:42.752840    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-148520: (2.200710397s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.74s)
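
Note: with the Docker driver, an empty --network= value makes minikube create a dedicated bridge network for the profile (named after it, as far as the KIC driver's defaults go); the follow-up "docker network ls" call is there to confirm it exists. A manual spot check against this run's profile:

	docker network ls --format {{.Name}} | grep docker-network-148520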

TestKicCustomNetwork/use_default_bridge_network (24.79s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-178620 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-178620 --network=bridge: (22.755707281s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-178620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-178620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-178620: (2.01900203s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.79s)

TestKicExistingNetwork (24.09s)
=== RUN   TestKicExistingNetwork
I1025 08:58:07.997151    9473 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1025 08:58:08.014215    9473 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1025 08:58:08.014285    9473 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1025 08:58:08.014308    9473 cli_runner.go:164] Run: docker network inspect existing-network
W1025 08:58:08.031886    9473 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1025 08:58:08.031923    9473 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1025 08:58:08.031943    9473 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1025 08:58:08.032097    9473 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1025 08:58:08.050563    9473 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b88230a1ccb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:f2:b0:df:6b:9b} reservation:<nil>}
I1025 08:58:08.051057    9473 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00195d1e0}
I1025 08:58:08.051100    9473 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1025 08:58:08.051163    9473 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1025 08:58:08.106743    9473 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-767443 --network=existing-network
E1025 08:58:10.457051    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-767443 --network=existing-network: (21.92213842s)
helpers_test.go:175: Cleaning up "existing-network-767443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-767443
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-767443: (2.023790182s)
I1025 08:58:32.070882    9473 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.09s)
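
Note: the trace above shows the fixture work done before minikube is started: the test confirms existing-network is absent, skips 192.168.49.0/24 because another profile's bridge already occupies it, settles on the free 192.168.58.0/24, and creates the network itself. The creation command, verbatim from the log, can be reused to stage a network for --network=existing-network:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	out/minikube-linux-amd64 start -p existing-network-767443 --network=existing-network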

TestKicCustomSubnet (24.62s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-911144 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-911144 --subnet=192.168.60.0/24: (22.421810023s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-911144 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-911144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-911144
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-911144: (2.181401006s)
--- PASS: TestKicCustomSubnet (24.62s)
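
Note: the assertion leans on Docker's Go-template support: "index .IPAM.Config 0" selects the network's first IPAM block, and .Subnet should echo back the value passed to --subnet:

	docker network inspect custom-subnet-911144 --format "{{(index .IPAM.Config 0).Subnet}}"

Expected output for this run is 192.168.60.0/24.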

TestKicStaticIP (26.46s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-226319 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-226319 --static-ip=192.168.200.200: (24.123757599s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-226319 ip
helpers_test.go:175: Cleaning up "static-ip-226319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-226319
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-226319: (2.180998573s)
--- PASS: TestKicStaticIP (26.46s)
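
Note: --static-ip pins the KIC container to a fixed address instead of letting it take the next free one in the profile's network, and the test round-trips the value through "minikube ip". By hand:

	out/minikube-linux-amd64 start -p static-ip-226319 --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p static-ip-226319 ip

The second command should print 192.168.200.200.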

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.57s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-768902 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-768902 --driver=docker  --container-runtime=crio: (20.722381854s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-771539 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-771539 --driver=docker  --container-runtime=crio: (20.931676194s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-768902
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-771539
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-771539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-771539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-771539: (2.323837655s)
helpers_test.go:175: Cleaning up "first-768902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-768902
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-768902: (2.358455619s)
--- PASS: TestMinikubeProfile (47.57s)

TestMountStart/serial/StartWithMountFirst (6.01s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-200676 --memory=3072 --mount-string /tmp/TestMountStartserial3192004181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-200676 --memory=3072 --mount-string /tmp/TestMountStartserial3192004181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.008928424s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.01s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-200676 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
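
Note: StartWithMountFirst wires a host directory into the node at start time: --mount-string maps the temp dir onto /minikube-host (a 9p mount, with --mount-port, --mount-msize and --mount-uid/--mount-gid tuning the transport and ownership). The verify step then only needs to list the guest side:

	out/minikube-linux-amd64 -p mount-start-1-200676 ssh -- ls /minikube-host

Files created in the host directory should appear in that listing.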

TestMountStart/serial/StartWithMountSecond (5.28s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-216417 --memory=3072 --mount-string /tmp/TestMountStartserial3192004181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-216417 --memory=3072 --mount-string /tmp/TestMountStartserial3192004181/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.278262928s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.28s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-216417 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-200676 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-200676 --alsologtostderr -v=5: (1.694601482s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-216417 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-216417
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-216417: (1.244654584s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.28s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-216417
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-216417: (6.277772512s)
--- PASS: TestMountStart/serial/RestartStopped (7.28s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-216417 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (60.87s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844306 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844306 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m0.376037113s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.87s)

TestMultiNode/serial/DeployApp2Nodes (3.26s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-844306 -- rollout status deployment/busybox: (1.800288923s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-c44q2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-cmr56 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-c44q2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-cmr56 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-c44q2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-cmr56 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.26s)
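
Note: the deployment is validated by extracting pod IPs and names with JSONPath and then resolving public and in-cluster names from every pod. The two extraction queries, reusable against any profile:

	kubectl get pods -o jsonpath='{.items[*].status.podIP}'
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'

With the busybox replicas spread across both nodes, the IP list should span two pod CIDRs.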

TestMultiNode/serial/PingHostFrom2Pods (0.73s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-c44q2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-c44q2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-cmr56 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-844306 -- exec busybox-7b57f96db7-cmr56 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
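
Note: the pipeline above recovers the host gateway address from inside each pod: busybox's nslookup prints the resolved address for host.minikube.internal on its fifth output line, awk 'NR==5' selects that line, and cut -d' ' -f3 takes the third space-separated field (192.168.67.1 here), which the pod then pings. The offsets are specific to busybox's nslookup output, so the same pipeline may need adjusting for other images:

	kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

(<busybox-pod> is a placeholder for a pod name from the deployment, e.g. busybox-7b57f96db7-c44q2.)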

TestMultiNode/serial/AddNode (23.79s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-844306 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-844306 -v=5 --alsologtostderr: (23.118011063s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.79s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-844306 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (9.98s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp testdata/cp-test.txt multinode-844306:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3328556480/001/cp-test_multinode-844306.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306:/home/docker/cp-test.txt multinode-844306-m02:/home/docker/cp-test_multinode-844306_multinode-844306-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m02 "sudo cat /home/docker/cp-test_multinode-844306_multinode-844306-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306:/home/docker/cp-test.txt multinode-844306-m03:/home/docker/cp-test_multinode-844306_multinode-844306-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m03 "sudo cat /home/docker/cp-test_multinode-844306_multinode-844306-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp testdata/cp-test.txt multinode-844306-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m02 "sudo cat /home/docker/cp-test.txt"
E1025 09:02:08.942606    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3328556480/001/cp-test_multinode-844306-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306-m02:/home/docker/cp-test.txt multinode-844306:/home/docker/cp-test_multinode-844306-m02_multinode-844306.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306 "sudo cat /home/docker/cp-test_multinode-844306-m02_multinode-844306.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306-m02:/home/docker/cp-test.txt multinode-844306-m03:/home/docker/cp-test_multinode-844306-m02_multinode-844306-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m03 "sudo cat /home/docker/cp-test_multinode-844306-m02_multinode-844306-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp testdata/cp-test.txt multinode-844306-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3328556480/001/cp-test_multinode-844306-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306-m03:/home/docker/cp-test.txt multinode-844306:/home/docker/cp-test_multinode-844306-m03_multinode-844306.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306 "sudo cat /home/docker/cp-test_multinode-844306-m03_multinode-844306.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306-m03:/home/docker/cp-test.txt multinode-844306-m02:/home/docker/cp-test_multinode-844306-m03_multinode-844306-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 ssh -n multinode-844306-m02 "sudo cat /home/docker/cp-test_multinode-844306-m03_multinode-844306-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.98s)
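
Note: "minikube cp" accepts node-qualified paths (<node>:<path>) on either side, so the matrix above exercises host-to-node, node-to-host and node-to-node copies for every pair, verifying each hop with ssh -n <node> "sudo cat ...". A node-to-node example from this run:

	out/minikube-linux-amd64 -p multinode-844306 cp multinode-844306:/home/docker/cp-test.txt multinode-844306-m02:/home/docker/cp-test_multinode-844306_multinode-844306-m02.txt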

TestMultiNode/serial/StopNode (2.31s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-844306 node stop m03: (1.277181336s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844306 status: exit status 7 (518.112199ms)

-- stdout --
	multinode-844306
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-844306-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-844306-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr: exit status 7 (510.055661ms)

-- stdout --
	multinode-844306
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-844306-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-844306-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:02:16.450870  147869 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:16.450998  147869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:16.451008  147869 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:16.451015  147869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:16.451233  147869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:02:16.451432  147869 out.go:368] Setting JSON to false
	I1025 09:02:16.451473  147869 mustload.go:65] Loading cluster: multinode-844306
	I1025 09:02:16.451750  147869 notify.go:220] Checking for updates...
	I1025 09:02:16.452533  147869 config.go:182] Loaded profile config "multinode-844306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:16.452562  147869 status.go:174] checking status of multinode-844306 ...
	I1025 09:02:16.453446  147869 cli_runner.go:164] Run: docker container inspect multinode-844306 --format={{.State.Status}}
	I1025 09:02:16.475867  147869 status.go:371] multinode-844306 host status = "Running" (err=<nil>)
	I1025 09:02:16.475896  147869 host.go:66] Checking if "multinode-844306" exists ...
	I1025 09:02:16.476164  147869 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-844306
	I1025 09:02:16.494634  147869 host.go:66] Checking if "multinode-844306" exists ...
	I1025 09:02:16.494958  147869 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:02:16.494992  147869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-844306
	I1025 09:02:16.513023  147869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/multinode-844306/id_rsa Username:docker}
	I1025 09:02:16.611348  147869 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:16.617885  147869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:02:16.630763  147869 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:02:16.686584  147869 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:02:16.676461527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:02:16.687141  147869 kubeconfig.go:125] found "multinode-844306" server: "https://192.168.67.2:8443"
	I1025 09:02:16.687168  147869 api_server.go:166] Checking apiserver status ...
	I1025 09:02:16.687205  147869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:02:16.698701  147869 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	W1025 09:02:16.707456  147869 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:02:16.707503  147869 ssh_runner.go:195] Run: ls
	I1025 09:02:16.711280  147869 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1025 09:02:16.715572  147869 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1025 09:02:16.715610  147869 status.go:463] multinode-844306 apiserver status = Running (err=<nil>)
	I1025 09:02:16.715621  147869 status.go:176] multinode-844306 status: &{Name:multinode-844306 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:02:16.715650  147869 status.go:174] checking status of multinode-844306-m02 ...
	I1025 09:02:16.715981  147869 cli_runner.go:164] Run: docker container inspect multinode-844306-m02 --format={{.State.Status}}
	I1025 09:02:16.735466  147869 status.go:371] multinode-844306-m02 host status = "Running" (err=<nil>)
	I1025 09:02:16.735487  147869 host.go:66] Checking if "multinode-844306-m02" exists ...
	I1025 09:02:16.735774  147869 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-844306-m02
	I1025 09:02:16.754010  147869 host.go:66] Checking if "multinode-844306-m02" exists ...
	I1025 09:02:16.754283  147869 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:02:16.754326  147869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-844306-m02
	I1025 09:02:16.772570  147869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21796-5966/.minikube/machines/multinode-844306-m02/id_rsa Username:docker}
	I1025 09:02:16.869846  147869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:02:16.882086  147869 status.go:176] multinode-844306-m02 status: &{Name:multinode-844306-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:02:16.882129  147869 status.go:174] checking status of multinode-844306-m03 ...
	I1025 09:02:16.882396  147869 cli_runner.go:164] Run: docker container inspect multinode-844306-m03 --format={{.State.Status}}
	I1025 09:02:16.901345  147869 status.go:371] multinode-844306-m03 host status = "Stopped" (err=<nil>)
	I1025 09:02:16.901370  147869 status.go:384] host is not running, skipping remaining checks
	I1025 09:02:16.901376  147869 status.go:176] multinode-844306-m03 status: &{Name:multinode-844306-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
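
Note: "minikube status" deliberately exits non-zero when any node is not fully running (exit status 7 in this run, with m03 stopped), so the Non-zero exit lines above are the expected outcome rather than a failure. Scripts can branch on the same signal:

	out/minikube-linux-amd64 -p multinode-844306 status || echo "at least one node is not running"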

TestMultiNode/serial/StartAfterStop (7.23s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-844306 node start m03 -v=5 --alsologtostderr: (6.510560539s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.23s)

TestMultiNode/serial/RestartKeepsNodes (66.53s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-844306
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-844306
E1025 09:02:42.755158    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-844306: (29.557198149s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844306 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844306 --wait=true -v=5 --alsologtostderr: (36.837346262s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-844306
--- PASS: TestMultiNode/serial/RestartKeepsNodes (66.53s)

TestMultiNode/serial/DeleteNode (5.29s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-844306 node delete m03: (4.684309582s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)

TestMultiNode/serial/StopMultiNode (28.95s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-844306 stop: (28.753316522s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844306 status: exit status 7 (99.343011ms)

-- stdout --
	multinode-844306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-844306-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr: exit status 7 (96.066308ms)

-- stdout --
	multinode-844306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-844306-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:04:04.858442  157496 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:04:04.858712  157496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:04:04.858721  157496 out.go:374] Setting ErrFile to fd 2...
	I1025 09:04:04.858725  157496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:04:04.858919  157496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:04:04.859081  157496 out.go:368] Setting JSON to false
	I1025 09:04:04.859111  157496 mustload.go:65] Loading cluster: multinode-844306
	I1025 09:04:04.859224  157496 notify.go:220] Checking for updates...
	I1025 09:04:04.859446  157496 config.go:182] Loaded profile config "multinode-844306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:04:04.859459  157496 status.go:174] checking status of multinode-844306 ...
	I1025 09:04:04.859935  157496 cli_runner.go:164] Run: docker container inspect multinode-844306 --format={{.State.Status}}
	I1025 09:04:04.878288  157496 status.go:371] multinode-844306 host status = "Stopped" (err=<nil>)
	I1025 09:04:04.878321  157496 status.go:384] host is not running, skipping remaining checks
	I1025 09:04:04.878330  157496 status.go:176] multinode-844306 status: &{Name:multinode-844306 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:04:04.878365  157496 status.go:174] checking status of multinode-844306-m02 ...
	I1025 09:04:04.878731  157496 cli_runner.go:164] Run: docker container inspect multinode-844306-m02 --format={{.State.Status}}
	I1025 09:04:04.896338  157496 status.go:371] multinode-844306-m02 host status = "Stopped" (err=<nil>)
	I1025 09:04:04.896360  157496 status.go:384] host is not running, skipping remaining checks
	I1025 09:04:04.896366  157496 status.go:176] multinode-844306-m02 status: &{Name:multinode-844306-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.95s)

TestMultiNode/serial/RestartMultiNode (27.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844306 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844306 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (26.724414323s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-844306 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (27.35s)
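The go-template in the last step prints each node's Ready condition. An equivalent jsonpath form (a sketch only, assuming kubectl's current context points at the restarted cluster) would be:

	$ kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'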

TestMultiNode/serial/ValidateNameConflict (24.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-844306
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844306-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-844306-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.70069ms)

-- stdout --
	* [multinode-844306-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-844306-m02' is duplicated with machine name 'multinode-844306-m02' in profile 'multinode-844306'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-844306-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-844306-m03 --driver=docker  --container-runtime=crio: (21.357810834s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-844306
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-844306: exit status 80 (296.59388ms)

-- stdout --
	* Adding node m03 to cluster multinode-844306 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-844306-m03 already exists in multinode-844306-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-844306-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-844306-m03: (2.400249003s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.19s)
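Profile names must be unique across both profiles and node machine names, which is why reusing the worker's machine name multinode-844306-m02 as a profile name fails with MK_USAGE while the unused -m03 succeeds. A quick pre-check before picking a name, assuming the minikube binary is on PATH:

	$ minikube profile list
	$ minikube node list -p multinode-844306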

TestPreload (88.29s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-864268 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-864268 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.02698143s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-864268 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-864268
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-864268: (5.984268428s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-864268 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-864268 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (31.670481239s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-864268 image list
helpers_test.go:175: Cleaning up "test-preload-864268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-864268
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-864268: (2.468565422s)
--- PASS: TestPreload (88.29s)
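The test drives the preload round trip: create a cluster with preloaded tarballs disabled, pull an extra image, then verify the image survives a stop/start cycle. A minimal manual repro, assuming a scratch profile named test-preload:

	$ minikube start -p test-preload --preload=false --driver=docker --container-runtime=crio
	$ minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p test-preload && minikube start -p test-preload
	$ minikube -p test-preload image list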

TestScheduledStopUnix (98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-344499 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-344499 --memory=3072 --driver=docker  --container-runtime=crio: (21.686911904s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-344499 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-344499 -n scheduled-stop-344499
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-344499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 09:06:51.009291    9473 retry.go:31] will retry after 142.144µs: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.010467    9473 retry.go:31] will retry after 192.12µs: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.011591    9473 retry.go:31] will retry after 230.892µs: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.012697    9473 retry.go:31] will retry after 285.826µs: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.013847    9473 retry.go:31] will retry after 373.403µs: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.014990    9473 retry.go:31] will retry after 947.148µs: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.016129    9473 retry.go:31] will retry after 1.573785ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.018310    9473 retry.go:31] will retry after 1.058264ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.019439    9473 retry.go:31] will retry after 2.687094ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.022619    9473 retry.go:31] will retry after 5.696221ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.028828    9473 retry.go:31] will retry after 7.68375ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.037058    9473 retry.go:31] will retry after 7.681431ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.045304    9473 retry.go:31] will retry after 16.080868ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.061511    9473 retry.go:31] will retry after 21.70577ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.083841    9473 retry.go:31] will retry after 17.372234ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
I1025 09:06:51.102179    9473 retry.go:31] will retry after 50.138114ms: open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/scheduled-stop-344499/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-344499 --cancel-scheduled
E1025 09:07:08.942933    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-344499 -n scheduled-stop-344499
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-344499
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-344499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1025 09:07:42.755428    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-344499
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-344499: exit status 7 (78.672281ms)

-- stdout --
	scheduled-stop-344499
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-344499 -n scheduled-stop-344499
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-344499 -n scheduled-stop-344499: exit status 7 (76.33978ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-344499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-344499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-344499: (4.775935494s)
--- PASS: TestScheduledStopUnix (98.00s)
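The flags exercised above form the scheduled-stop interface: --schedule arms a timer, --cancel-scheduled disarms it, and the TimeToStop status field reports the pending window. Condensed into a sketch, assuming a running profile named demo:

	$ minikube stop -p demo --schedule 5m
	$ minikube status -p demo --format={{.TimeToStop}}
	$ minikube stop -p demo --cancel-scheduled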

TestInsufficientStorage (12.34s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-791576 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-791576 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.830272145s)

-- stdout --
	{"specversion":"1.0","id":"e5686714-07f8-41eb-9fa3-0e0e905d77f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-791576] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e7f62b0-7a72-4b7f-9608-ebfbac18fe75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21796"}}
	{"specversion":"1.0","id":"87b91cb2-e372-4c4e-8de1-780fbf388c2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"09e72c05-51bf-4db9-b846-094f0d2e7d80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig"}}
	{"specversion":"1.0","id":"a04e0457-4602-4eba-a975-50d296114d4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube"}}
	{"specversion":"1.0","id":"a53e57fd-9350-4839-a8b9-9dc6bd19afd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"db183178-a7f6-41a9-b0b5-c1adb9f6a5fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"12789c2f-884a-4961-ba7f-f930e7fa23f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8ba1cf58-9a5f-4245-b80d-b38b8f9af017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c0e8c7ed-3b5a-420d-9577-a3375a5cb870","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9df6b9b-31e9-4070-a06e-ae0b1e44b36d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7a5633a7-e0a0-474a-b570-7609d0ecff48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-791576\" primary control-plane node in \"insufficient-storage-791576\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"457985f4-1069-42a7-a829-107a6ab673c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"07370d6e-489f-495e-b0a9-b236ff5f795a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"50dfcfb5-7136-4cbe-ac05-00e3794de9a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-791576 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-791576 --output=json --layout=cluster: exit status 7 (290.855924ms)

-- stdout --
	{"Name":"insufficient-storage-791576","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-791576","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 09:08:16.962367  177609 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-791576" does not appear in /home/jenkins/minikube-integration/21796-5966/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-791576 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-791576 --output=json --layout=cluster: exit status 7 (294.976263ms)

-- stdout --
	{"Name":"insufficient-storage-791576","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-791576","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 09:08:17.258205  177718 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-791576" does not appear in /home/jenkins/minikube-integration/21796-5966/kubeconfig
	E1025 09:08:17.268783  177718 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/insufficient-storage-791576/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-791576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-791576
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-791576: (1.921567688s)
--- PASS: TestInsufficientStorage (12.34s)
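Exit code 26 (RSRC_DOCKER_STORAGE) fires before the node is created, so cleanup is only a profile delete. The remediation quoted in the error event reduces to the following, with --force available to bypass the check entirely:

	$ docker system prune                                      # optionally with -a
	$ minikube start -p insufficient-storage-791576 --force    # skip the storage check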

TestRunningBinaryUpgrade (52.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.424994982 start -p running-upgrade-462303 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.424994982 start -p running-upgrade-462303 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.71271309s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-462303 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-462303 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.654820217s)
helpers_test.go:175: Cleaning up "running-upgrade-462303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-462303
E1025 09:10:12.011787    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-462303: (2.942298298s)
--- PASS: TestRunningBinaryUpgrade (52.95s)

TestKubernetesUpgrade (302.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.380721747s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-497496
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-497496: (2.300684106s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-497496 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-497496 status --format={{.Host}}: exit status 7 (79.931912ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.668027954s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-497496 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (99.240817ms)

-- stdout --
	* [kubernetes-upgrade-497496] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-497496
	    minikube start -p kubernetes-upgrade-497496 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4974962 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-497496 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497496 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.282026585s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-497496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-497496
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-497496: (2.670313004s)
--- PASS: TestKubernetesUpgrade (302.54s)
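The upgrade path is stop, then start the same profile with a newer --kubernetes-version; the attempted downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED), and the only way back is delete and recreate, as the suggestion block spells out. Condensed with this run's profile:

	$ minikube start -p kubernetes-upgrade-497496 --kubernetes-version=v1.28.0
	$ minikube stop -p kubernetes-upgrade-497496
	$ minikube start -p kubernetes-upgrade-497496 --kubernetes-version=v1.34.1
	$ minikube delete -p kubernetes-upgrade-497496   # required before going back to v1.28.0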

TestMissingContainerUpgrade (80.75s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1968703871 start -p missing-upgrade-047620 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1968703871 start -p missing-upgrade-047620 --memory=3072 --driver=docker  --container-runtime=crio: (26.624559716s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-047620
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-047620: (12.964385189s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-047620
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-047620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.187051435s)
helpers_test.go:175: Cleaning up "missing-upgrade-047620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-047620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-047620: (2.421004306s)
--- PASS: TestMissingContainerUpgrade (80.75s)
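This covers recovery after the profile's container disappears out from under minikube: the next start recreates the node from the saved profile config. Sketched with this run's names:

	$ docker stop missing-upgrade-047620 && docker rm missing-upgrade-047620
	$ minikube start -p missing-upgrade-047620    # rebuilds the missing node container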

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestPause/serial/Start (55.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-613858 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-613858 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.020175344s)
--- PASS: TestPause/serial/Start (55.02s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629442 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-629442 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (96.300114ms)

-- stdout --
	* [NoKubernetes-629442] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
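--no-kubernetes and --kubernetes-version are mutually exclusive, hence the MK_USAGE exit 14. The accepted form, which the next test runs, simply drops the version flag:

	$ minikube start -p NoKubernetes-629442 --no-kubernetes --driver=docker --container-runtime=crio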

TestNoKubernetes/serial/StartWithK8s (40.82s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629442 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629442 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.469482496s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-629442 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.82s)

TestStoppedBinaryUpgrade/Upgrade (56.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1873764339 start -p stopped-upgrade-626100 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1873764339 start -p stopped-upgrade-626100 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.672664312s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1873764339 -p stopped-upgrade-626100 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1873764339 -p stopped-upgrade-626100 stop: (2.412508132s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-626100 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1025 09:09:05.819067    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-626100 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.347998276s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (56.43s)
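The stopped-binary upgrade flow: create and stop the cluster with the old release binary, then start the same profile with the new binary, which migrates it in place. Condensed from the commands above:

	$ /tmp/minikube-v1.32.0.1873764339 start -p stopped-upgrade-626100 --vm-driver=docker --container-runtime=crio
	$ /tmp/minikube-v1.32.0.1873764339 -p stopped-upgrade-626100 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-626100 --driver=docker --container-runtime=crio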

TestNoKubernetes/serial/StartWithStopK8s (28.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629442 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629442 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.079666765s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-629442 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-629442 status -o json: exit status 2 (369.286338ms)

-- stdout --
	{"Name":"NoKubernetes-629442","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-629442
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-629442: (6.141617665s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.59s)

TestPause/serial/SecondStartNoReconfiguration (6.15s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-613858 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-613858 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.136543211s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-626100
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-626100: (1.001139875s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestNoKubernetes/serial/Start (8.21s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629442 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629442 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.208260558s)
--- PASS: TestNoKubernetes/serial/Start (8.21s)

TestNetworkPlugins/group/false (4.16s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-687131 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-687131 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (193.798956ms)

-- stdout --
	* [false-687131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1025 09:09:36.696797  202216 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:09:36.697115  202216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:36.697127  202216 out.go:374] Setting ErrFile to fd 2...
	I1025 09:09:36.697133  202216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:36.697379  202216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5966/.minikube/bin
	I1025 09:09:36.698112  202216 out.go:368] Setting JSON to false
	I1025 09:09:36.699584  202216 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3125,"bootTime":1761380252,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:09:36.699753  202216 start.go:141] virtualization: kvm guest
	I1025 09:09:36.701969  202216 out.go:179] * [false-687131] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:09:36.703283  202216 notify.go:220] Checking for updates...
	I1025 09:09:36.703301  202216 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:09:36.706119  202216 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:09:36.707719  202216 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5966/kubeconfig
	I1025 09:09:36.709167  202216 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5966/.minikube
	I1025 09:09:36.710387  202216 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:09:36.711670  202216 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:09:36.713551  202216 config.go:182] Loaded profile config "NoKubernetes-629442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 09:09:36.713711  202216 config.go:182] Loaded profile config "force-systemd-flag-742570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:09:36.713841  202216 config.go:182] Loaded profile config "running-upgrade-462303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 09:09:36.713962  202216 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:09:36.741368  202216 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:09:36.741477  202216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:09:36.817631  202216 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 09:09:36.804739824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:09:36.817802  202216 docker.go:318] overlay module found
	I1025 09:09:36.819670  202216 out.go:179] * Using the docker driver based on user configuration
	I1025 09:09:36.821276  202216 start.go:305] selected driver: docker
	I1025 09:09:36.821290  202216 start.go:925] validating driver "docker" against <nil>
	I1025 09:09:36.821300  202216 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:09:36.823310  202216 out.go:203] 
	W1025 09:09:36.824501  202216 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 09:09:36.825663  202216 out.go:203] 

** /stderr **
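The rejection is by design: the crio runtime ships no built-in networking, so --cni=false fails validation with MK_USAGE before any node is created. Passing a concrete CNI instead should start cleanly; for example (bridge is one of minikube's built-in --cni values, to the best of my knowledge):

	$ minikube start -p false-687131 --cni=bridge --driver=docker --container-runtime=crio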
net_test.go:88: 
----------------------- debugLogs start: false-687131 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-687131

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-687131

>>> host: /etc/nsswitch.conf:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: /etc/hosts:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: /etc/resolv.conf:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-687131

>>> host: crictl pods:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: crictl containers:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> k8s: describe netcat deployment:
error: context "false-687131" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-687131" does not exist

>>> k8s: netcat logs:
error: context "false-687131" does not exist

>>> k8s: describe coredns deployment:
error: context "false-687131" does not exist

>>> k8s: describe coredns pods:
error: context "false-687131" does not exist

>>> k8s: coredns logs:
error: context "false-687131" does not exist

>>> k8s: describe api server pod(s):
error: context "false-687131" does not exist

>>> k8s: api server logs:
error: context "false-687131" does not exist

>>> host: /etc/cni:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: ip a s:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: ip r s:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: iptables-save:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: iptables table nat:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> k8s: describe kube-proxy daemon set:
error: context "false-687131" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-687131" does not exist

>>> k8s: kube-proxy logs:
error: context "false-687131" does not exist

>>> host: kubelet daemon status:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: kubelet daemon config:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> k8s: kubelet logs:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-687131

>>> host: docker daemon status:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: docker daemon config:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

>>> host: /etc/docker/daemon.json:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687131"

                                                
                                                
----------------------- debugLogs end: false-687131 [took: 3.779204067s] --------------------------------
helpers_test.go:175: Cleaning up "false-687131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-687131
--- PASS: TestNetworkPlugins/group/false (4.16s)
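
Every collector in the debug dump above fails the same way because no "false-687131" cluster or kubectl context was ever created before the dump ran. For ad-hoc debugging outside the harness, the kubectl-based collectors can be guarded on context existence; a minimal sketch, assuming kubectl is on PATH (illustrative only, not part of the test code):

  # run a collector only if the context actually exists
  if kubectl config get-contexts -o name | grep -qx "false-687131"; then
    kubectl --context false-687131 logs -n kube-system -l k8s-app=kube-dns
  else
    echo "context false-687131 not found; skipping k8s collectors"
  fi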

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-629442 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-629442 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.985342ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
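
For reference, `systemctl is-active --quiet` reports state only through its exit code: 0 for an active unit and, conventionally, 3 for an inactive one, which matches the "Process exited with status 3" above. A standalone sketch of the same check (assumes the NoKubernetes-629442 profile still exists):

  # non-zero exit proves kubelet is not running, as expected without Kubernetes
  out/minikube-linux-amd64 ssh -p NoKubernetes-629442 \
    "sudo systemctl is-active --quiet service kubelet" || echo "kubelet inactive"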

TestNoKubernetes/serial/ProfileList (4.52s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.795823203s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.52s)
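
The JSON output form is convenient for scripting against the same data; a sketch, assuming `jq` is installed and that the current schema groups profiles under "valid" and "invalid" (neither is guaranteed by this report):

  out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'
  # prints one profile name per line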

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-629442
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-629442: (1.318005829s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (7.67s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629442 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629442 --driver=docker  --container-runtime=crio: (7.672279353s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.67s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-629442 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-629442 "sudo systemctl is-active --quiet service kubelet": exit status 1 (325.754659ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStartStop/group/old-k8s-version/serial/FirstStart (51.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.330951325s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.33s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-959110 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce] Pending
helpers_test.go:352: "busybox" [2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2b47d91d-7ebf-45e5-b9ce-8dc6ba11c2ce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003699573s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-959110 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)
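
The harness polls for pods matching the integration-test=busybox label until they are Running and Ready; roughly the same wait can be expressed with kubectl alone. A sketch, not the harness's actual mechanism:

  kubectl --context old-k8s-version-959110 wait pod \
    -l integration-test=busybox --for=condition=Ready --timeout=480s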

TestStartStop/group/old-k8s-version/serial/Stop (16.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-959110 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-959110 --alsologtostderr -v=3: (16.719995252s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.72s)

TestStartStop/group/no-preload/serial/FirstStart (49.18s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.183116679s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.18s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110: exit status 7 (85.760091ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-959110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
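
The "may be ok" note reflects that `minikube status` encodes state in its exit code rather than failing outright: exit status 7 with "Stopped" printed is consistent with minikube's bit-flag status encoding (host, kubelet, and apiserver all down), and addons can still be enabled against the stopped profile. A sketch of the same check:

  out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-959110
  rc=$?
  # rc=0 -> Running; rc=7 with "Stopped" printed -> cluster exists but is stopped
  [ "$rc" -eq 7 ] && out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-959110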

TestStartStop/group/old-k8s-version/serial/SecondStart (53.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1025 09:12:08.942796    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/addons-475995/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-959110 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.219061892s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-959110 -n old-k8s-version-959110
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.55s)

TestStartStop/group/no-preload/serial/DeployApp (7.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-016092 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [acbe50c4-9fa3-499e-8b25-b374b1be96f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [acbe50c4-9fa3-499e-8b25-b374b1be96f9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003265877s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-016092 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.23s)

TestStartStop/group/no-preload/serial/Stop (16.23s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-016092 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-016092 --alsologtostderr -v=3: (16.229811727s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fl9k8" [ea4be496-b6f5-4cc7-8474-a67d52eee0df] Running
E1025 09:12:42.752841    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004324362s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fl9k8" [ea4be496-b6f5-4cc7-8474-a67d52eee0df] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004127017s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-959110 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092: exit status 7 (84.892564ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-016092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (51.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-016092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.515946886s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016092 -n no-preload-016092
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.89s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-959110 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
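
The image verifier lists everything loaded in the cluster and flags images outside the expected Kubernetes set; the kindnet and busybox images above are expected leftovers from earlier tests rather than failures. For ad-hoc inspection (a sketch, not the harness's check):

  out/minikube-linux-amd64 -p old-k8s-version-959110 image list | grep -v '^registry.k8s.io/'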

TestStartStop/group/embed-certs/serial/FirstStart (71.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.531289735s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.53s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.544289768s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.54s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jnwc4" [2d30e5f2-2721-44b1-bd1f-e3da225a334d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003741712s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jnwc4" [2d30e5f2-2721-44b1-bd1f-e3da225a334d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003940968s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-016092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-016092 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/FirstStart (30.42s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (30.422179714s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.42s)
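
This start exercises minikube's --extra-config plumbing, which routes a setting to a component by its prefix (kubeadm.pod-network-cidr above). A hypothetical example of the same syntax against another component, for illustration only:

  out/minikube-linux-amd64 start -p newest-cni-036155 --driver=docker \
    --container-runtime=crio --network-plugin=cni --extra-config=kubelet.max-pods=64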

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-891466 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2a8cbb66-d3e8-45f9-aa54-4adc15127a32] Pending
helpers_test.go:352: "busybox" [2a8cbb66-d3e8-45f9-aa54-4adc15127a32] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2a8cbb66-d3e8-45f9-aa54-4adc15127a32] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003547717s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-891466 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-106968 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [05ff451f-6a2b-4a5f-a0ee-6b04e30d84fe] Pending
helpers_test.go:352: "busybox" [05ff451f-6a2b-4a5f-a0ee-6b04e30d84fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [05ff451f-6a2b-4a5f-a0ee-6b04e30d84fe] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003907144s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-106968 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-891466 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-891466 --alsologtostderr -v=3: (18.155441572s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.16s)

TestStartStop/group/embed-certs/serial/Stop (16.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-106968 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-106968 --alsologtostderr -v=3: (16.244743199s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.24s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (18.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-036155 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-036155 --alsologtostderr -v=3: (18.005864324s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968: exit status 7 (84.771531ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-106968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (46.43s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-106968 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.075981571s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-106968 -n embed-certs-106968
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466: exit status 7 (79.543567ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-891466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-891466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.644707914s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-891466 -n default-k8s-diff-port-891466
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.00s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155: exit status 7 (103.71173ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-036155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (14.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-036155 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (13.906973638s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-036155 -n newest-cni-036155
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-036155 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/auto/Start (40.31s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.309661454s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.31s)

TestNetworkPlugins/group/kindnet/Start (40.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.344814242s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.34s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bffzw" [2fc2bae6-0701-44da-a1f7-2d8a0104adaa] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003685879s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bffzw" [2fc2bae6-0701-44da-a1f7-2d8a0104adaa] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005363279s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-106968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lrnt4" [d5f7c60e-ee23-40f4-a54a-e65c20dd7009] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005970266s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-106968 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lrnt4" [d5f7c60e-ee23-40f4-a54a-e65c20dd7009] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00462222s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-891466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-891466 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestNetworkPlugins/group/calico/Start (50.42s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.415038374s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.42s)

TestNetworkPlugins/group/custom-flannel/Start (52.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.573668206s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-687131 "pgrep -a kubelet"
I1025 09:15:56.277340    9473 config.go:182] Loaded profile config "auto-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

TestNetworkPlugins/group/auto/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-687131 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zczzd" [9cff968a-fbf2-42b3-a6c1-6f9d23504ded] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zczzd" [9cff968a-fbf2-42b3-a6c1-6f9d23504ded] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00363129s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.45s)
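
`kubectl replace --force` deletes and recreates the netcat deployment, so each plugin profile starts from a fresh pod, after which the harness waits on the app=netcat label. A rough standalone equivalent (a sketch; testdata/netcat-deployment.yaml is part of the test tree and not shown here):

  kubectl --context auto-687131 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-687131 wait pod -l app=netcat --for=condition=Ready --timeout=900s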

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-gwl4p" [fa62cbd8-3e03-4dcd-b66c-940fb374ec0c] Running
I1025 09:15:56.680719    9473 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003799008s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
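
ControllerPod only confirms the CNI daemonset pods are up before any connectivity probes run; an equivalent hand check (a sketch):

  kubectl --context kindnet-687131 -n kube-system wait pod \
    -l app=kindnet --for=condition=Ready --timeout=600s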

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-687131 "pgrep -a kubelet"
I1025 09:16:02.606173    9473 config.go:182] Loaded profile config "kindnet-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-687131 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6smwd" [00cc5a28-2e92-49c3-8d8a-bbe4e2a19688] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6smwd" [00cc5a28-2e92-49c3-8d8a-bbe4e2a19688] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003222421s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-687131 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)
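Each DNS case is a one-shot kubectl exec: run nslookup for the in-cluster service name inside the netcat deployment and treat a non-zero exit as failure. A sketch of the same call from Go's standard library, with the context name copied from this run:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "--context", "auto-687131",
            "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("DNS lookup failed:", err)
            os.Exit(1)
        }
    }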

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
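Localhost and HairPin both reduce to a zero-byte TCP connect from inside the pod: nc -z only checks that the port accepts a connection, -w 5 bounds the wait, and the HairPin variant dials the pod's own Service name ("netcat") to verify hairpin NAT. A standalone Go equivalent of that probe; the hostname resolves only from inside the cluster:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Equivalent of `nc -w 5 -z netcat 8080`: connect, send nothing, close.
        conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
        if err != nil {
            fmt.Println("hairpin connect failed:", err)
            return
        }
        conn.Close()
        fmt.Println("hairpin connect ok")
    }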

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-687131 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (40.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (40.633150186s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.63s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-qwkc7" [d01c55d5-cbf6-44a1-81c8-217e73ae5962] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-qwkc7" [d01c55d5-cbf6-44a1-81c8-217e73ae5962] Running
E1025 09:16:38.509784    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00415496s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
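(The interleaved E1025 cert_rotation lines here and further down appear to be leftovers from the shared test process: client-go's transport cache still watches client certificates of profiles that earlier tests deleted, such as old-k8s-version-959110, so the loads fail with "no such file or directory". As far as this log shows, they do not affect the network-plugin results.)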

TestNetworkPlugins/group/flannel/Start (56.77s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.773815322s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.77s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-687131 "pgrep -a kubelet"
I1025 09:16:42.491470    9473 config.go:182] Loaded profile config "calico-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-687131 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2hbms" [9ddf72d6-0c0e-41de-9cd7-4966885da920] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2hbms" [9ddf72d6-0c0e-41de-9cd7-4966885da920] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004104844s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-687131 "pgrep -a kubelet"
I1025 09:16:45.501448    9473 config.go:182] Loaded profile config "custom-flannel-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-687131 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7c8x8" [a45bfc57-3968-47e0-9387-71d17e2a89fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7c8x8" [a45bfc57-3968-47e0-9387-71d17e2a89fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003823395s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-687131 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-687131 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-687131 "pgrep -a kubelet"
I1025 09:17:10.695558    9473 config.go:182] Loaded profile config "enable-default-cni-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-687131 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-br8w4" [9b5f9a49-a344-40e1-b076-700846ed0043] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-br8w4" [9b5f9a49-a344-40e1-b076-700846ed0043] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003477491s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.25s)

TestNetworkPlugins/group/bridge/Start (60.4s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-687131 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m0.396459916s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.40s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-687131 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-hfjwq" [4f2c4508-e7da-41cf-8fcc-5b4fd0a9d56e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004350884s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-687131 "pgrep -a kubelet"
I1025 09:17:39.797441    9473 config.go:182] Loaded profile config "flannel-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-687131 replace --force -f testdata/netcat-deployment.yaml
E1025 09:17:39.953456    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/old-k8s-version-959110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gkr4x" [eaa867e5-71e2-4ad5-9a69-e243d683d081] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gkr4x" [eaa867e5-71e2-4ad5-9a69-e243d683d081] Running
E1025 09:17:42.752750    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/functional-734361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:43.277789    9473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5966/.minikube/profiles/no-preload-016092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004091086s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-687131 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.10s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-687131 "pgrep -a kubelet"
I1025 09:18:15.081447    9473 config.go:182] Loaded profile config "bridge-687131": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-687131 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8cpk6" [d9fba37e-5b1d-4ed9-83c6-dc229bf22ced] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8cpk6" [d9fba37e-5b1d-4ed9-83c6-dc229bf22ced] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003638019s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-687131 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-687131 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (26/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-664368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-664368
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.83s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-687131 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-687131

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-687131

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /etc/hosts:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /etc/resolv.conf:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-687131

>>> host: crictl pods:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: crictl containers:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> k8s: describe netcat deployment:
error: context "kubenet-687131" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-687131" does not exist

>>> k8s: netcat logs:
error: context "kubenet-687131" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-687131" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-687131" does not exist

>>> k8s: coredns logs:
error: context "kubenet-687131" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-687131" does not exist

>>> k8s: api server logs:
error: context "kubenet-687131" does not exist

>>> host: /etc/cni:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: ip a s:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: ip r s:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: iptables-save:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: iptables table nat:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-687131" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-687131" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-687131" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: kubelet daemon config:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> k8s: kubelet logs:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-687131

>>> host: docker daemon status:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: docker daemon config:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: docker system info:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: cri-docker daemon status:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: cri-docker daemon config:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: cri-dockerd version:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: containerd daemon status:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: containerd daemon config:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: containerd config dump:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: crio daemon status:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: crio daemon config:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: /etc/crio:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

>>> host: crio config:
* Profile "kubenet-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687131"

----------------------- debugLogs end: kubenet-687131 [took: 3.63420648s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-687131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-687131
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)
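Note that every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found": the kubenet case is skipped before any cluster is created, so the diagnostic collector has nothing to inspect. The dump is expected noise for a skipped network plugin, not a failure.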

TestNetworkPlugins/group/cilium (4.78s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-687131 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-687131

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-687131

>>> host: /etc/nsswitch.conf:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /etc/hosts:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /etc/resolv.conf:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-687131

>>> host: crictl pods:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: crictl containers:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> k8s: describe netcat deployment:
error: context "cilium-687131" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-687131" does not exist

>>> k8s: netcat logs:
error: context "cilium-687131" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-687131" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-687131" does not exist

>>> k8s: coredns logs:
error: context "cilium-687131" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-687131" does not exist

>>> k8s: api server logs:
error: context "cilium-687131" does not exist

>>> host: /etc/cni:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: ip a s:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: ip r s:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: iptables-save:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: iptables table nat:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-687131

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-687131

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-687131" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-687131" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-687131

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-687131

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-687131" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-687131" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-687131" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-687131" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-687131" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: kubelet daemon config:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> k8s: kubelet logs:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-687131

>>> host: docker daemon status:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: docker daemon config:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: docker system info:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: cri-docker daemon status:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: cri-docker daemon config:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: cri-dockerd version:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: containerd daemon status:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: containerd daemon config:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: containerd config dump:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: crio daemon status:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: crio daemon config:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: /etc/crio:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

>>> host: crio config:
* Profile "cilium-687131" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687131"

----------------------- debugLogs end: cilium-687131 [took: 4.572796549s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-687131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-687131
--- SKIP: TestNetworkPlugins/group/cilium (4.78s)
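
Note: the "kubectl config" dump above (clusters: null, contexts: null) accounts for every "context was not found" line in this section: the kubeconfig has no cilium-687131 entry because the cluster was never started. Below is a minimal sketch of that same lookup, assuming k8s.io/client-go is available; this is illustrative code, not code from the test suite.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	const name = "cilium-687131"
	// With contexts: null, this lookup fails, matching the errors above.
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context was not found for specified context: %s\n", name)
	}
}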